Small Non-coding RNAs Associated with Viral Infectious Diseases of Veterinary Importance: Potential Clinical Applications

MicroRNAs (miRNAs) represent a class of small non-coding RNA (sncRNA) molecules that can regulate mRNAs by inducing their degradation or by blocking translation. Considering that miRNAs are ubiquitous, stable, and conserved across animal species, it seems feasible to exploit them for clinical applications. Unlike in human viral diseases, where some miRNA-based molecules have progressed to clinical application, in veterinary medicine this concept is just starting to come into view. Clinically, miRNAs could represent powerful diagnostic tools to pinpoint animal viral diseases and/or prognostic tools to follow up disease progression or remission. Additionally, the possible consequences of miRNA dysregulation make them potential therapeutic targets and open the possibility of using them as tools to generate viral disease-resistant livestock. This review presents an update of preclinical studies on using sncRNAs to combat viral diseases that affect pet and farm animals. Moreover, we discuss the possibilities and challenges of bringing these bench-based discoveries to the veterinary clinic.

Multiple miRNA species can target the same mRNA (cooperativity), and one miRNA can target hundreds of mRNA species (multiplicity) (2). The binding of miRNAs to the 3′ untranslated region (UTR) of a particular mRNA leads to either mRNA degradation or repression of protein translation (3). miRNAs can be highly regulated, both in pattern and degree of expression, across multiple animal diseases. By targeting hundreds of host- and pathogen-encoded genes, a single miRNA can influence the gene networks essential for development and progression of a disease condition (4). This, coupled with their high degree of conservation, has made miRNAs attractive candidates for clinical application to combat pathogenic animal viruses. Being highly stable, they can be used as disease biomarkers (5). The availability of chemically synthesized miRNA mimics and agonists and of vector-based RNA interference (RNAi) technology raised the idea of therapies based on non-coding RNA and made it feasible to utilize this approach to create genetically modified animal breeds that are resistant to certain viral pathogens. In this review, we summarize the current state of laboratory studies geared toward clinical applications of sncRNAs [miRNAs, small interfering RNAs (siRNAs), and short hairpin RNAs (shRNAs)] to diagnose and combat viral diseases that affect animals of veterinary importance and may thus impact animal and human health.

Potential Biomarkers

The emerging correlation between miRNA expression and disease pathogenesis and outcomes suggests the potential use of miRNAs as biomarkers. In the first report that described the role of a miRNA as a diagnostic and prognostic marker in humans, Takamizawa et al. demonstrated that in patients with lung cancer, lower let-7 levels predicted a significantly worse prognosis after potentially curative resection (6). The intense use of advanced genomic technologies has resulted in rapid progress in human personalized medicine, where biomarker studies play a central role. Similar research interest has been emerging in veterinary medicine, albeit with some delay. Indeed, Henry et al. reported in 2010 that biomarker studies in veterinary medicine were still lagging behind those in humans (7).
Biomarker research in the field of veterinary medicine focuses on the health and welfare of farm and companion animals as well as broader aspects, such as the biosafety of animal-derived food and milk production. Generally, the potential applications for biomarkers in veterinary clinics include diagnosis, staging, prognosis, and monitoring responses to therapy. Although several well-established biomarkers have been recognized for a number of veterinary viral diseases, there are still many barriers. As one example of many, lack of specificity has been recorded when using acute phase proteins (APPs) as biomarkers in pigs, horses, and cattle suffering from inflammatory conditions that may have infectious etiologies, such as foot-and-mouth disease virus (FMDV) infection, porcine reproductive and respiratory syndrome virus infection, pneumonia, arthritis, enteritis, and post-castration inflammation (8-10). The concentration of some available biomarker molecules is affected by animal age (11). Thus, there is a need for additional, improved biomarkers for animal diseases. There is growing recognition that miRNAs may provide a specific signature that reflects the existence of a given clinical state. In this regard, profiling the fluctuation of miRNA expression levels in infected organs, tissues, or single cells compared to uninfected ones throughout the course of a disease might reflect severity and outcome of the disease, including the likelihood of response to a given therapy. As biomarkers, miRNAs represent ideal candidates owing to their biological and clinical relevance, practicality, and consistent correlation with disease activity. The biological rationale behind using miRNAs as biomarkers arises from their involvement in diverse physiological and pathological processes. miRNAs are extremely practical, with many advantages over other currently used biomarkers. For instance, there are efforts to develop them into diagnostics for the differentiation between viral and bacterial infections, each of which typically requires different interventions, such as quarantining versus feeding of antibiotics. The smaller number of identified miRNAs (12), compared to the approximately 30,000 protein-encoding genes currently known, implies that computational approaches dealing with miRNAs would be simpler and would require fewer resources than proteomics- or mRNA-based approaches. Another merit of miRNAs is their resistance to degradation by ribonucleases. For instance, they are stable in formalin-fixed, paraffin-embedded tissue (FFPE) independent of formalin fixation time and duration of tissue block storage (13). In contrast, mRNAs are highly fragmented and unstable in FFPE, which is problematic when FFPE is the only available sample type or when long storage of FFPE blocks has led to mRNA degradation (13). miRNAs can be detected in a large number of easily accessible samples, such as tissue biopsies, whole blood, blood cells, cerebrospinal fluid, saliva, urine, and other body fluids. Circulating miRNAs have proven to be highly resistant to RNase activity, extreme pH, and extreme temperatures, and certainly more so than mRNAs. This is, at least in part, because they are often contained in lipid vesicles (microvesicles and exosomes) or bound by RNA-binding proteins (5). Additionally, miRNAs resist prolonged exposure to room temperature and repeated freezing/thawing cycles.
Some miRNAs may be uniquely expressed only in specific body fluids, as exemplified by miR-224 (plasma/serum), miR-637 (tears), miR-193b (breast milk), and miR-508-5p (seminal fluid) (14). As opposed to miRNAs, proteins are a much more complex family of molecules due to the use of alternative reading frames, splice variants, and various post-translational modifications, and many proteins of interest are of low abundance and/or may display major sequence variations among clinically relevant species (5).

MicroRNAs as Shared Biomarkers in Human and Animal Disease

In order to assess the role of miRNAs as a class of shared biomarkers, it is important to investigate the cross-species conservation and regulation of the same miRNA species or miRNA family. Most of the annotated miRNAs are evolutionarily conserved among a variety of organisms, particularly in their mature form, suggesting that the majority of miRNAs constitute a large class of predominantly orthologous or homologous molecules. As exemplified by miR-146b-5p, cross-species variation in miRNA sequence is typically observed in 1 or 2 nt in the periphery of the mature form and toward its 3′ end, i.e., away from the highly conserved seed region (Figure 1).

FIGURE 1 | Sequence alignment of the mature form of miR-146b-5p among animals and humans. The open box illustrates the high degree of conservation of the seed region of miR-146b-5p. Accession numbers are according to miRBase 21 (12).

This unique conservation pattern might be attributed to the conservation of their genomic origin. A consensus motif of 7-8 nt upstream and downstream of the pre-miRNA hairpin has been reported to be conserved among nematodes (15). Researchers from Slovenia and the USA have put together a catalog describing the integrated assembly of intragenic miRNAs and their host genes in humans, mice, and chickens (16). They showed that several miRNA genes were located within homologous areas, which implies that miRNA colocalization, co-expression, and potential coregulation may be conserved broadly across evolution and thus be applicable to both animal and human diseases. In the same context, previous studies reported that 300 canine miRNAs are homologs of annotated human miRNAs and that miRNA clusters are usually conserved between humans and dogs (17). Using next-generation sequencing, Li et al. indicated that miRNAs in immune organs of chickens and ducks were about 99% conserved (18). To gain further insights into miRNAs that are shared between species and might be used as common biomarkers, we selected a group of miRNAs that are commonly expressed upon influenza A virus (IAV) infection in humans and chickens. These miRNAs were further analyzed with miRviewer (19), a database that includes all known miRNAs of currently annotated animal genomes. William Pearson's alignment program was used to assess the degree of conservation of mature miRNAs between the two species (Table 1). The percentage of sequence identity was further confirmed by the BioEdit sequence alignment editor (20). Indeed, there is a high degree of conservation of most miRNAs between the two species (Table 1). For some miRNAs, there are sequence differences between humans and chickens in the form of deleted or added nucleotides, but these are mostly located outside the seed region. We speculate that conserved miRNAs might be the most promising candidates for universal biomarkers that may help in simultaneously pinpointing a given disease state in both species.
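As a rough illustration of the kind of comparison summarized in Table 1, the sketch below computes ungapped percent identity and checks seed-region (positions 2-8) conservation between two mature miRNA sequences. The sequences shown are placeholders rather than the actual miRBase entries, and the published analysis used William Pearson's alignment program and BioEdit, not this simplified calculation.

```python
# Minimal sketch: percent identity and seed-region check for two mature miRNAs.
# The sequences below are placeholders for illustration only; real analyses
# should use the annotated miRBase entries cited in the text.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Ungapped identity computed over the shorter of the two sequences."""
    n = min(len(seq_a), len(seq_b))
    matches = sum(a == b for a, b in zip(seq_a[:n], seq_b[:n]))
    return 100.0 * matches / n

def seed_conserved(seq_a: str, seq_b: str) -> bool:
    """Compare the seed region (positions 2-8 of the mature miRNA)."""
    return seq_a[1:8] == seq_b[1:8]

human = "UGAGAACUGAAUUCCAUAGGCU"    # placeholder sequence
chicken = "UGAGAACUGAAUUCCAUAGGCUG"  # placeholder sequence

print(f"identity: {percent_identity(human, chicken):.1f}%")
print(f"seed region conserved: {seed_conserved(human, chicken)}")
```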
In contrast, the non-conserved miRNAs might have the least contributory role as universal biomarkers but may play roles in more species-specific aspects of disease pathogenesis and outcomes. Apart from the sequence conservation of miRNAs, the presence of the same miRNA signatures in both humans and animals upon contracting the same infectious disease supports the concept of common biomarkers. Taken together, these observations indicate that cross-species comparisons of human and animal miRNA expression profiles as well as their conservation could provide unique opportunities to exploit miRNAs as universal biomarkers and also underline both commonalities and differences in pathology of the same disease in different species. Limitations in Using miRNAs as Biomarkers to Combat Viral Diseases While using miRNAs as novel biomarkers in the veterinary field represents a promising concept, it comes with unique challenges. One challenge is the presence of miRNA isomers (isomiRs), i.e., forms of miRNA that differ slightly from the annotated mature sequence. They are likely created by the enzymatic addition of adenine, cytidine, or uridine and/or imprecise cleavage by the enzymes Dicer or Drosha (21). Observational studies have shown that isomiRs can be regulated upon infection and hence are biologically and functionally meaningful (22). There is some functional overlap between miRNAs and their isomers (23). However, most miRNA annotation tools ignore these isomers by considering them as either noise or sequencing artifacts. The presence of isomiRs might affect miRNA stability and repression capability (24) and therefore reduce their value as biomarkers. When looking for miRNAs as circulating biomarkers, it is important to consider the low miRNA yield (1-10 ng/μl) in body fluids, such as plasma, serum, and urine (25). While some studies suggested that plasma contains a higher miRNA concentration than serum (26), a growing body of evidence has indicated that using serum as a biological sample for miRNA biomarker studies might be biased (27). This is because the stress that blood cells are exposed to during coagulation results in the release of nucleic acids, including miRNAs into the serum, which may change the true repertoire of circulating serum miRNAs giving rise to biased values. With this in mind, the lack of correlation in detection of some miRNAs in plasma and serum is not unexpected. Prior centrifugation of the blood and hemolysis might affect the amount and stability of the target miRNA and require some modifications in the isolation protocols (26). Moreover, difficulties in miRNA extraction can compromise yield and quality (28). Considering that miRNAs are differentially expressed among different animal breeds (29,30), it is plausible that miRNA levels may differ among different animal breeds if they contract the same disease. This is also apparent among humans where the expression of some miRNAs was found to be related to ethnicity. In this regard, receiver operating characteristic (ROC) curve analysis indicated that let-7c predicted the onset of breast cancer with an area under the curve (AUC) of 0.99 in African Americans while having an AUC of only 0.78 in Caucasians. On the other hand, the best predictor in Caucasians was miR-589 with an AUC of 0.85 (31). This holds true for other biomarkers as well. Two reports documented a significant breed effect on the level of plasma NT-proBNP, a diagnostic marker in dogs with degenerative mitral valve disease (DMVD) (32,33). 
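The breed- and ethnicity-dependent ROC analyses cited above boil down to estimating an area under the curve for a candidate marker in a given population. The hedged sketch below shows how such an AUC would be computed for a circulating miRNA biomarker; the expression values are simulated and do not reproduce the let-7c or miR-589 data from the cited studies.

```python
# Hypothetical sketch of a ROC/AUC evaluation for a candidate miRNA biomarker.
# Expression values are simulated; they do not reproduce the cited study data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Simulated normalized expression of one circulating miRNA
healthy = rng.normal(loc=1.0, scale=0.4, size=60)
diseased = rng.normal(loc=1.8, scale=0.5, size=60)

expression = np.concatenate([healthy, diseased])
labels = np.concatenate([np.zeros(60), np.ones(60)])  # 1 = diseased

auc = roc_auc_score(labels, expression)
fpr, tpr, thresholds = roc_curve(labels, expression)
print(f"AUC = {auc:.2f}")  # values near 1.0 indicate a strong discriminator
```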
RNA Interference as a Promising Tool for Therapeutic Intervention

RNA interference is a form of post-transcriptional gene silencing that can function in a broad range of eukaryotic species. Fighting animal viruses with RNAi can be mediated by using siRNAs or miRNAs, although the origin of the two molecules is different. While miRNAs are endogenously produced through two processing steps in the nucleus and cytoplasm, siRNA can be exogenously introduced directly into the cytoplasm as a double-stranded molecule (34). Once in the cytoplasm, both miRNA and siRNA pass through the same processing steps, in which they are digested by the Dicer enzyme to form a duplex. Only one strand of this duplex is translocated into the RNA-induced silencing complex (RISC) to mediate its function (35). While siRNA pairs with perfect complementarity to its target mRNA, causing its cleavage, miRNAs tend to bind to their mRNA targets less perfectly, leading to translational repression. Historically, laboratory-based experiments using RNAi to block the replication of animal viruses started as early as 2003, with work against IAV (36). Harnessing miRNAs for therapeutic use relies on gain or loss of function and is linked to the expression level of the miRNA (35). miRNAs that are beneficial for the virus and are up-regulated upon infection might be blocked using classic or modified anti-miRNAs (37). In this regard, antagomiRs (cholesterol-conjugated anti-miRNAs) have been used in vitro and in vivo (38). Chemically modified nucleotides, such as locked nucleic acid (LNA), and other modifications have made it conceivable to design more stable and specific oligonucleotides. In an in vivo system, the effect of LNA proved to be long-lasting and safe, as neither LNA-associated toxicity nor histopathological changes were detected (39). Although there are attempts to downregulate the Dicer or Drosha enzymes as indirect ways to block miRNAs, this mechanism should be strictly controlled, since blocking these enzymes will affect the entire miRNA population (40). In cases where miRNAs tend to inhibit virus replication, a therapeutic approach could be to over-express these miRNAs or to restore their levels. In this context, synthetic miRNA mimics resembling mature miRNAs that could be recognized by RISC would be a suitable tool (40). The in vivo delivery of miRNA modalities to specific cells has remained a substantial barrier. Using viruses or virus-like vectors might be an innovative approach, since viruses have evolved over many generations to infect certain cells and to deliver foreign RNA, including miRNA, in a tissue- and cell-specific manner (41). Viral vectors can express pri-miRNA or pre-miRNA-like structures or even mature miRNA. Here, RNA viruses of both nuclear and cytoplasmic origin have been utilized (42). miRNAs may have advantages over siRNAs as therapeutic candidates. In spite of having off-target effects, miRNAs bind to their targets with partial complementarity (43) and thus likely tackle the high rate of mutation seen in many viruses better than siRNAs. Also, siRNAs can trigger interferon production as part of a cellular stress response pathway that can cause translation arrest, growth inhibition, and cytotoxicity (44). In contrast to the shRNA approach, the use of miRNAs enables the expression of multiple miRNAs from a single transcript, as compared to only one in regular shRNA vectors.
Indeed, transfection of cells with two different shRNAs may lead to competition of the two for transport and incorporation into the RISC, resulting in a reduction in shRNA processing and activity (45). Despite reports on efficient silencing of genes using RNAi, differences in the efficacy of a given vector between experiments have been reported. This might be due to inefficient cellular uptake of the RNAi and may also depend on the cell type. What follows is an overview and update of the in vitro and in vivo experiments aiming at evaluating the potential use of small RNAs, including miRNAs, as a treatment option against viral diseases that affect animals of agricultural and/or economic importance. Influenza A Virus Infection with IAV is a worldwide problem that affects both human and animal health (46,47). The presence of multiple viral genotypes and the possibilities of antigenic shift and drift continue to raise concerns about the pandemic potential (48,49). Current influenza vaccines and therapies have proved to be inefficient to combat the continuously evolved IAV strains due to the occurrence of antigenic variation within influenza virus genomes due to point mutations (drift) or re-assortment (shift) (50,51). The emergence of resistant virus strains added another limitation to anti-IAV therapies (52). RNAi formulated in an appropriate agent would offer the potential for a new therapy by targeting viral transcripts. Furthermore, inserting a let-7b response element within the H1N1 genome created an attenuated strain that conferred protection in mice against challenge with a lethal strain, suggesting that the attenuated strain might serve as a live-attenuated vaccine (53). Around 13,500 possible siRNA target sites are present in the IAV genome. Recent reports described the usefulness of methods and procedures to select highly effective influenza-specific siRNAs in cell culture, mice, and ferrets (54). Using in silico approaches, Raza and colleagues identified five conserved amino acid sequences, three in the hemagglutinin (HA) gene (RGLFGAIAGFIE, YNAELLV, and AIAGFIE) and two in the neuraminidase (N) (RTQSEC and EECSYP) gene, which might provide potential RNAi-based therapeutic targets in various IAV strains (55). RNAi has been shown to be effective in suppressing IAV replication both in vitro and in vivo. For instance, transfecting MDCK cells with siRNA specific for nucleoprotein (NP, nucleotide positions 1496-1514) or polymerase acidic (PA, nucleotide positions 2087-2106) mRNA sequences inhibited IAV replication (36). Moreover, a mixture of siRNAs specific for highly conserved regions of NP and PA can protect mice from lethal challenge with IAV of the H5 and H7 subtypes [e.g., Ref. (56)]. siRNA against the matrix 2 (M2) gene exhibited similar or slightly higher reduction in virus replication in MDCK cells and in human HEK293 cells (57). Likewise, IAV titers in MDCK cells and in embryonated eggs were reduced more than 50-and 100-fold, respectively, when shRNA targeting the polymerase basic 1 (PB1) gene was transfected in vitro and in vivo using a liposome-encapsulated pSIREN/PB1 vector. In mice, the survival rate ranged between 50 and 100% (58). In another experiment, siRNA targeting a region of the M1 gene between nucleotides 331 and 351 was found to be the most effective in inhibiting M1 protein translation in cell lines. Inhibiting the viral M1 protein using this siRNA caused an 80% reduction in viral titers in supernatants of siRNAtransduced MDCK cells at 6, 8, and 10 hpi. 
Furthermore, virus budding ability was reduced by 40%, suggesting the ability of siRNA targeting the M1 protein to suppress IAV replication (59). Another report demonstrated the efficacy of anti-NP and anti-PA shRNAs in reducing IAV titers in MDCK cells and in avian CH-SAH cells. Significant decreases of up to 80% in the levels of IAV NP mRNA and up to 370-fold in viral titer were observed in the CH-SAH cells. The approach also worked well in MDCK cells, as demonstrated by significant decreases of up to 90% in the level of viral mRNA and up to 106-fold in IAV infective titer. Furthermore, the authors identified a novel, highly efficient, and conserved RNAi target site in the viral NP gene, which can be used in antiviral cocktails of shRNAs to prevent IAV escape from RNAi silencing (60). Zhou and colleagues investigated the silencing effect of M2- and NP-specific siRNAs on IAV (H5N1, H1N1, and H9N2) replication in cell lines and mice (61). In the cell lines, a 0.51-1.63 TCID50 reduction in virus titers was observed, and delivery of pS-M48 and pS-NP1383 significantly reduced lung virus titers in the infected mice (16- to 50-fold reduction in titer) and partially protected them from lethal IAV challenge. As an alternative approach, host cell genes that are crucial for IAV replication can be targeted to control the virus. Expression of α2,3-linked (avian-type) and α2,6-linked (human-type) sialic acid (SA) receptors on host tissues is considered one of the host range and tissue tropism determinants of influenza viruses. An siRNA duplex was used to inhibit IAV binding and internalization via silencing of the ST6GAL1 gene, which encodes β-galactoside α-2,6-sialyltransferase I (ST6Gal I), a protein important in SA receptor formation (62). In addition, targeting cellular proteases has been discussed as a method to suppress IAV replication. Rogers and colleagues studied pulmonary miRNA expression in mice infected with the IAV H5N1 strain and verified that furin, a member of the convertase family that mediates cleavage of hemagglutinin, is a target gene for miRNAs upon H5N1 infection (63). This highlights the importance of using miRNAs as potential therapeutic agents against IAV.

Venezuelan Equine Encephalitis Virus

Venezuelan equine encephalitis virus (VEEV) belongs to the genus Alphavirus in the family Togaviridae. This virus is still endemic in many parts of the world and is considered an emerging disease threat in other parts, as well as a potential biological weapon (64). So far, there are no US Food and Drug Administration (FDA)-approved drugs or vaccines against VEEV. Thus, developing artificial miRNAs that can be used to control VEEV infection is a step in the right direction. Indeed, VEEV has been targeted efficiently by siRNA (65). Most recently, it was shown that targeting the viral non-structural protein-4 (nsp-4) region with miRNAs in BHK-21 cells efficiently inhibited viral replication, with artificial miR-3 having the greatest effect (66). This study indicated that these artificial miRNAs merit further testing in animal models for antiviral therapies against VEEV infection.

Foot-and-Mouth Disease Virus

Foot-and-mouth disease (FMD) is a highly infectious viral disease that usually affects cloven-hoofed animals.
The direct impact of an FMD outbreak includes great losses to agricultural production and disruption of local economies, while the indirect effects lie in the control measures required at both local and global levels and the high cost of disease control and prevention programs. FMDV has an RNA genome and many serotypes, and targeting conserved viral genes, such as 3D, VP4, and 2B, is a major aim in order to control FMD (67). The use of peptide-conjugated morpholino oligomers (PPMOs) and miRNAs with sequences complementary to various segments of the FMDV genome effectively blocked viral replication in cell culture models (68). Likewise, DNA vector-based RNAi technology can specifically suppress the expression of the VP1, 3D, VP4, and 2B genes and thus inhibit viral replication in vivo and in vitro (67,69). Using adenovirus-based vectors to express siRNA molecules in cell lines and mice, Kim et al. suggested applying RNAi treatments both before and after infection with FMDV (70). Treatment after FMDV infection inhibited viral replication effectively, but a combination of treatment before and after infection gave the best results in pig kidney cells, IBRS-2 cells, and suckling mice, as evidenced by lower viral titers in cell lines and higher survival rates of the treated mice. These experiments did reveal that the RNAi method took considerable time to induce a silencing effect, ranging from 24 to 48 h (71,72). This is considered a limitation when attempting to control certain rapidly spreading contagious diseases, including FMD, as viral spread will be faster than the inhibitory action of the RNAi. Finally, the use of artificial miRNAs (amiRs) resulted in specific silencing of reporter genes fused to FMDV target sequences (73).

Classical Swine Fever Virus

Classical swine fever virus (CSFV) can cause a hemorrhagic disease in pigs characterized by disseminated intravascular coagulation, thrombocytopenia, and immunosuppression (74,75). CSFV has been recognized for nearly 200 years and now appears to have been eradicated in Europe and North America due to vaccinations and other control measures. The first study using siRNA to block CSFV replication was conducted in 2008 (76). Three siRNA molecules targeting different regions of the CSFV Npro and NS5B genes were prepared and transfected into PK-15 cells. They caused a 4- to 12-fold reduction in viral genome copy number. In another study, synthetic siRNA transfected into swine kidney cells (SK-6) targeted nucleotides 1130-1148 in the nucleocapsid protein (C) gene of CSFV, with a subsequent reduction in viral titer compared to either mock-treated or non-treated cells (77). This emphasizes the potential of siRNA to inhibit CSFV replication. Clearly, in vivo experiments need to be conducted to confirm this effect.

Rabies Virus

Rabies is a zoonotic disease caused by rabies virus (RV), a member of the Rhabdoviridae family. The virus typically infects canines (78) and is usually transmitted by animal bites, causing a lethal encephalitis. The annual number of deaths due to rabies has been estimated to be approximately 59,000 (79). The control of RV in wild carnivores has moved from culling operations to parenteral and oral vaccination of susceptible species (80), but inhibiting viral replication with siRNAs or miRNAs may be another promising approach.
Cell lines have been used to assess the usefulness of siRNA in inhibiting RV replication, either by using a pool of siRNAs (81) or by single or multiple artificial miRNAs targeting the RV nucleocapsid (N) gene (45). In these in vitro assays, there was a comparable virus reduction at 72 h post-infection, especially when a single miRNA completely matched the target. Similar results were reported by others [e.g., Ref. (82)]. In cultured cells and in a murine model, RV glycoproteins were shown to be essential for trans-synaptic viral spread between neurons (83). This observation encouraged other researchers to target the genes encoding such glycoproteins. Sonwane et al. studied the ability of adenovirus-based siRNAs, delivered to BHK-21 cells, to inhibit RV replication and subsequently tested this approach in mice (84). In this study, siRNA inhibited viral replication in cell lines and mice. In BHK-21 cells, siRNA targeting the RV polymerase gene (L gene) was found to be more effective than siRNA targeting the RV NP (N gene) in inhibiting and reducing RV replication. Specifically, a 48.2% reduction of RV foci was seen in cells in which the L gene was targeted, versus a 41.8% reduction when the N gene was targeted. A significant, even greater, difference was observed at the mRNA level (17.8- versus 5.7-fold reduction). In mice, inoculation of both siRNA vectors resulted in 50% protection against a subsequent lethal RV injection. siRNAs simultaneously targeting the glycoprotein G and N genes led to an 87% reduction in viral release, demonstrating that siRNAs directed against different targets may act synergistically and increase efficacy of siRNA-based interventions against RV (85). Taken together, the above results do suggest that use of siRNAs constitutes a promising approach to interventions against RV.

Viral Diseases of Fish

Viral infection in fish aquaculture can be devastating and costly (86). Early reports of RNAi-based treatments described use of this technology in fish and shellfish in 2008 (87). In fish betanodavirus, two amino acid residues in the B2 protein (R53 and R60) bind viral RNA to circumvent the RNAi pathway, underscoring the importance of the antiviral role of the host RNAi machinery (88). One example is the study by Dang et al. (89), in which siRNA introduced into cells infected with red seabream iridovirus specifically and effectively bound to mRNA encoding the virus major capsid protein, leading to a reduction in the production of virus particles in the supernatant of virus-infected cells, as compared to cells receiving the control treatment. These results provide encouraging evidence that siRNA technology might be used to control fish viral diseases. More recently, a shRNA construct was found to inhibit the proliferation of viral hemorrhagic septicemia virus by targeting its G gene in a sequence-specific manner (90). Infection with cyprinid herpesvirus 3 causes severe financial losses in the common carp and koi culture industries worldwide (91). Although most investigations have employed in vitro approaches, RNAi might be a promising tool to combat this herpesvirus in carp. For instance, a pool of siRNAs specific for enzymes of DNA synthesis and capsid proteins of cyprinid herpesvirus 3 can be a potential inhibitor of virus replication in carp fibroblasts (92). Along the same line, Gotesman et al. demonstrated that siRNAs can inhibit the thymidine kinase and DNA polymerase genes of cyprinid herpesvirus 3, causing decreased release of viral particles from transfected common carp brain cells (93).
Viral infection in shrimp constitutes a great problem, and excellent reviews have discussed the use of RNAi in controlling various viral infections in shrimp [e.g., Ref. (94-96)].

Potential Use of RNAi to Create Genetically Engineered Virus-Resistant Animals

Genetic selection has been successful in mediating remarkable progress in livestock improvement. Genetic engineering of livestock is commonly used to produce pharmaceuticals or to enhance production characteristics of animals but has also proven to be important in producing animals with infectious disease resistance. For example, cows have been genetically engineered to be resistant against Staphylococcus aureus-induced mastitis (97), and laboratory investigations have been conducted with regard to creating α-herpesvirus-resistant livestock (98). Furthermore, there are efforts to create livestock resistant against gastroenteritis coronavirus infection, but published studies are limited to work with mice (99). Against IAV infection, two potent lentivirus-based shRNAs targeting the NP and PA genes of IAV were used to generate IAV-resistant mice (100). However, a successful challenge experiment has not been reported in this system. Subsequent studies based on inhibiting genes of other pathogens have been conducted (61). With improved RNAi techniques, it is conceivable that genetically engineered disease-resistant animals, based on siRNA or shRNA technology, may someday become reality in veterinary infectious disease medicine. Even prion diseases have been the target of transgenic-animal technology featuring shRNAs. Golding and colleagues attempted the use of siRNA technology to generate prion-resistant goats and cattle (101). First, they designed a lentivirus-based shRNA tagged with green fluorescent protein (GFP), which was directed against caprine prion protein precursor (PrPC) mRNA, and then transfected this vector into an adult goat fibroblast cell line. These cells were then used for somatic nuclear transfer to produce transgenic goat embryos for subsequent in vitro differentiation in various stages of pre-implantation development. They confirmed the silencing capacity of the shRNA in brain tissue of the growing fetus compared to an age-matched normal fetus, observing an approximately 90% reduction in the expression of PrPC. This suggests that the technique had surpassed a major technical hurdle, although clinical efficacy in reducing the risk of a neurodegenerative disease was not determined and data regarding efficacy were not presented. Furthermore, two studies described the efficacy of RNAi to silence FMDV in transgenic bovine fetal epithelium cells (BFEC), although rigorous negative controls were lacking, making it difficult to ascribe any effects to the transgenic manipulations. The first of these was conducted by Wang et al., who described the construction of three recombinant lentiviral vectors containing shRNA against VP2 (RNAi-VP2), VP3 (RNAi-VP3), or VP4 (RNAi-VP4) of FMDV and subsequent testing of their silencing power in both 293 and BHK-21 cells (102). The lenti-RNAi-VP4 vector was transfected into bovine fetal fibroblast cells. The stably transfected cells were transferred into enucleated oocytes, and the reconstructed embryos were then transferred to recipient cows. shRNA expressed in transgenic fetuses significantly degraded viral RNA after inoculation with FMDV at a titer of 100 TCID50 and inhibited viral replication.
Thus, primary transgenic bovine fetal tongue epithelium cells became much more resistant to FMDV challenge. In the second report, a shRNA-expressing lentiviral vector targeting VP1 of FMDV resulted in strong suppression of VP1 protein expression in 293T cells and also significantly inhibited viral replication in BHK-21 cells (103). The construct was then transfected into bovine fetal fibroblast cells. Cloning these somatic cells resulted in 3-month-old transgenic fetuses. FMDV RNA synthesis and viral replication were significantly reduced in primary tongue epithelial cells from the transgenic fetuses, suggesting that RNAi technology can potentially be used to generate transgenic cattle resistant against FMDV. Taken together, the studies summarized above support the idea that transgenic cloning may prove to be a useful tool to deliver antiviral and anti-prion RNAi to the germ line of animals of veterinary importance, but substantial additional work remains to be done before this technology may demonstrate efficacy in veterinary practice.

REMAINING CHALLENGES

Despite the excitement about utilizing non-coding RNAs to combat animal viral diseases, considerable challenges still need to be overcome before they can be used clinically. Animal breeders tend to rear their flocks in large groups under intensive or semi-intensive husbandry or on large farms. It would be wasteful in terms of money, time, and labor to deliver these expensive molecules on an individual basis. In this case, most veterinarians prefer to use antiviral therapies in a common-source bio-vehicle, for instance food, water, or air, to ensure quick accessibility. We think that using individual miRNA-based therapies will be more practical in special cases, such as the following: racehorses; the very expensive parent flocks of chickens and turkeys that are intended for production of specific pathogen-free (SPF) eggs; purebred domestic animals kept as stock for distributing semen for artificial insemination; and animals used in crossbreeding to improve certain traits for meat, milk, or fat production. Controlling contagious viral diseases, for instance FMDV and IAV, necessitates a rapid intervention strategy to prevent virus spread from one farm to another and from animals to humans. In this regard, RNAi that produces its inhibitory effect within 1 or 2 days in cell lines is considered insufficient, and a more rapidly operating approach is needed. Another technical challenge is that excessive levels of the introduced miRNAs can saturate the host's endogenous processing machinery for other host small RNAs, giving rise to toxicity, pathology, and mortality in the animal under therapy (104). Therefore, the dose of the introduced RNAi-based therapy should be well controlled. The delivery of the RNAi molecule is a key roadblock in this whole process. This is because RNAi molecules are negatively charged and do not penetrate the cell membrane effectively, a step that is necessary for subsequent silencing of mRNAs in the cytoplasm (105). Additionally, they may be quickly excreted, display low stability and poor tissue specificity, and be released inefficiently inside cells (106). Although the delivery of the silencing molecule may be mediated via vectors, suboptimal vector selection might reduce the silencing effect. Many delivery systems, such as nanoparticles, cationic lipids, calcium phosphate, antibodies, cholesterol, and viral vectors, have been tested (107).
From another perspective, the use of a single RNAi silencing molecule with a low percent match to the target mRNA would lead to poor target reduction. Possible solutions include applying a single siRNA that is 100% identical to the sequence of interest or applying more than one siRNA targeting different conserved regions of the target gene. In the case of IAV, spontaneous mutations were estimated to occur at a rate of approximately 1.5 × 10^-5 per nucleotide per infection cycle (50), suggesting that target sequence mismatches will arise inevitably. Another challenge is to develop a universal RNAi molecule against the same sequence in multiple influenza strains. Some viruses may evolve mechanisms to circumvent the targeting RNAi molecule, either by expressing virus-encoded suppressors or by mutation (108). In order to avoid this, scientists have tried to design RNAi molecules that simultaneously target several sequences within a viral gene (109). In practice, in the fish aquaculture setting, RNAi-based therapies have shown some limitations. In some fish farms, rearing cages are kept floating in sea or river water, so-called open sea or river cage aquaculture. Under such a system, introducing RNAi molecules into fish feed allows uneaten food containing the therapeutic to settle to the bottom of the water body. This would be ineffective and would also make the feed available to non-target organisms (110). Thus, an alternative, improved approach would be to use RNAi in land-based ponds or tanks, owing to their direct accessibility to fish and the easy disposal of waste materials. The commercial field application of injectable therapy is neither practical nor realistic, especially with shrimp, which are reared in an intensive system. Despite its relatively high expense, soaking the shrimp in a solution containing the RNAi silencing molecule is a more practical way to ensure that effective suppression of the gene is achieved (111). Unfortunately, there are no shrimp cell lines available to the research community, delaying a better understanding of RNAi application in shrimp farms. Effective design of the RNAi molecule is also of special concern. Although various computational tools have been developed to systematically evaluate the targets for miRNAs and/or siRNAs (112-114), non-specific off-target effects need to be anticipated. The many parameters that influence specificity of miRNAs/siRNAs include the selected target region, size, the starting nucleotide, GC content, the thermodynamic properties of the introduced molecule, and the presence of internal repeats. Apart from an effective design, the use of accurate positive and negative controls is necessary to ensure the validity of RNAi data (115).
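To illustrate the kind of filtering such design tools perform, the sketch below scores hypothetical 21-nt siRNA candidates by GC content and a simple internal-repeat check. The thresholds and candidate sequences are illustrative assumptions only and do not reproduce the criteria of the cited tools (112-114).

```python
# Hedged sketch: rule-of-thumb screening of hypothetical siRNA candidates.
# Thresholds (30-60% GC, no 4-nt homopolymer runs) are illustrative only.
import re

def gc_content(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def has_homopolymer_run(seq: str, run: int = 4) -> bool:
    """Flag runs of identical nucleotides, one simple proxy for internal repeats."""
    pattern = r"(.)\1{" + str(run - 1) + r"}"
    return re.search(pattern, seq.upper()) is not None

candidates = [
    "GCAUGGUACUCAGGAUCUAAU",  # placeholder 21-nt sequences, not real targets
    "GGGGCCCCAAAAUUUUGGGGC",
    "AUGCUAGCUAGGAUCCUAGGA",
]

for seq in candidates:
    ok = 30.0 <= gc_content(seq) <= 60.0 and not has_homopolymer_run(seq)
    print(f"{seq}  GC={gc_content(seq):.0f}%  pass={ok}")
```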
FUTURE DIRECTIONS

From the evidence gathered thus far, we have every reason to be optimistic about the future use of sncRNAs in the diagnosis, monitoring, and treatment of animal viral diseases. Zoonotic viruses continue to pose a public health threat to humans. Some miRNAs associated with zoonotic viral diseases have been found to be conserved between human and animal reservoirs and to exhibit similar tissue tropism. It is important to investigate both the contribution of these miRNAs to the zoonotic nature of diseases and their potential roles as biomarkers or therapeutic tools for humans and animals. This is even more important for viral diseases affecting poultry populations that are reared under both intensive and semi-intensive systems, where the pathogens can be transmitted in a short time to populate the environment and infect susceptible hosts. Regarding the use of RNAi in combating viruses, the search for a target sequence conserved across strains is of highest priority in studies targeting animal viruses, in particular those featuring rapid genomic changes, such as IAV and other RNA viruses. However, using a pool of various siRNAs or a cocktail of siRNAs specific for virus and host genes might reduce escape of mutant viruses. In addition, it would be valuable to develop more rapidly acting RNAi technology to inhibit spread of highly contagious infections, such as FMD. Prospectively, incorporating the RNAi molecule into animal feed or the water supply might be a practical choice for the treatment of animals reared in large numbers, such as fish or poultry. Using this strategy, successful experiments have been recorded in shrimp infected with white spot syndrome virus (WSSV) (116). In spite of the extensive efforts toward formulating a suitable vehicle, one that delivers the smallest RNAi quantity in a non-toxic way remains to be discovered. In this respect, the use of a natural exosome or a natural or synthetic high-density lipoprotein (HDLP) is a novel and promising approach. These are just a few areas of research that are likely to engage veterinary scientists and virologists for years ahead. These and other improvements should further facilitate the use of miRNA and siRNA to prevent and control animal viruses at veterinary clinical sites and in the field.

CONCLUSION

Small non-coding RNAs are known to be crucial regulators of gene expression, and they have great potential for applications in the diagnosis, prevention, and treatment of viral diseases of veterinary importance (Figure 2). In this respect, the deregulation of miRNAs upon infection, their stability, and their tissue specificity have made their study as biomarkers a fruitful area of research. siRNA molecules, together with miRNA mimics or agonists, can be delivered to the infected animal as a treatment option. Although there are currently no genetically engineered virus-resistant animals, the likelihood of exploiting RNAi technology, including miRNAs, is growing and is expected to help attain this aim. Bringing these molecules to the market will remain challenging, and many barriers still need to be overcome. In fact, in vitro models would enable more detailed studies on the clinical relevance of these molecules. However, experimental animal models and infections of natural hosts in laboratory investigations will afford more realistic insights into the best ways to utilize sncRNAs to improve animal health. Importantly, developing animal-specific databases that contain experimentally validated small RNA molecules and related functional analyses will facilitate using these data for future research. The continual emergence of zoonotic viruses warrants effective collaborations between physicians and veterinarians on this issue. The available evidence suggests that the clinical use of sncRNAs in combating animal viruses may be possible in the not too distant future.

AUTHOR CONTRIBUTIONS

MS did the literature search, wrote the initial draft of the manuscript, and prepared the figures and tables. FP oversaw the project, edited the manuscript including the final version, and takes responsibility for the integrity of the data.
FUNDING

This study was supported by a German-Egyptian Research Long Term Scholarship (GERLS), a joint program between the German Academic Exchange Service (DAAD) and the Egyptian Ministry of Higher Education and Scientific Research (grant ID: A/11/92510) to MS, and by iMed, the Helmholtz Association's Initiative on Personalized Medicine (to FP).
Lateralized frontal activity for Japanese phonological processing during child development

Phonological awareness is essential for reading, and is common to all language systems, including alphabetic languages and Japanese. This cognitive factor develops during childhood, and is thought to be associated with shifts in brain activity. However, the nature of this neurobiological developmental shift is unclear for speakers of Japanese, which is not an alphabetical language. The present study aimed to reveal a shift in brain functions for processing phonological information in native-born Japanese children. We conducted a phonological awareness task and examined hemodynamic activity in 103 children aged 7-12 years. While younger children made mistakes and needed more time to sort phonological information in reverse order, older children completed the task quickly and accurately. Additionally, younger children exhibited increased activity in the bilateral dorsolateral prefrontal cortex (DLPFC), which may be evidence of immature phonological processing skills. Older children exhibited dominant activity in the left compared with the right DLPFC, suggesting that they had already acquired phonological processing skills. We also found significant effects of age and lateralized activity on behavioral performance. During earlier stages of development, the degree of left lateralization appears to have a smaller effect on behavioral performance. Conversely, in later stages of development, the degree of left lateralization appears to have a stronger influence on behavioral performance. These initial findings regarding a neurobiological developmental shift in Japanese speakers suggest that common brain regions play a critical role in the development of phonological processing skills among different language systems, such as Japanese and alphabetical languages.

Introduction

The ability to read is vital to modern life. The action of reading words requires several abilities, including phonological awareness, vocabulary, naming speed, and visual perception (e.g., Cunningham et al., 1990; Carver, 1992). Of these, phonological awareness is an important predictor of reading performance in the late stage of child development (Liberman et al., 1974). Phonological awareness refers to the ability to detect phonological structures in spoken or mentally recalled sounds and to discriminate between these and/or minimal units of the phoneme (Yopp, 1992; Torgesen et al., 1997). This awareness normally arises in the early stage of child development and continues to improve gradually during childhood. Deficits in phonological awareness are often seen in children with developmental dyslexia who have severe difficulties reading and writing (Liberman, 1973; Chiappe et al., 2001; Ramus, 2001). Atypical brain functions are thought to underlie such deficits in this population (Paulesu et al., 2001; Temple et al., 2001; Shaywitz et al., 2002). Several brain regions are thought to play a role in the processing of phonological information, as revealed by recent neurophysiological studies using functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) (e.g., Bitan et al., 2007; Khateb et al., 2007; Kovelman et al., 2012). These include the left inferior frontal gyrus, superior temporal gyrus, left temporoparietal lobe, and fusiform gyrus (Temple et al., 2001; Shaywitz et al., 2002; Hoeft et al., 2006).
While previous studies have reported on the involvement of these regions, the specific nature of this activity appears to depend on the component of the phonological process. For example, the left angular gyrus performs grapheme-to-phoneme transformations (Bitan et al., 2007), while the left superior temporal gyrus is implicated in constructing phonological representations from serial auditory information (Buchsbaum et al., 2001). The left dorsolateral prefrontal cortex (DLPFC), including the inferior frontal gyrus, stores articulatory representations (Zatorre et al., 1992, 1996) and is thought to play a critical role in the development of phonological awareness (Booth et al., 2004). The left DLPFC shows hyperactivity during phonological awareness tasks, even in very young children (e.g., ~5 years of age, Kovelman et al., 2012), and the intensity of left DLPFC activity is related to phonological processing skills during childhood (Turkeltaub et al., 2003). The above findings are mostly taken from studies of alphabetic languages. Thus, it is not clear whether the same brain functions are involved in the development of phonological awareness in the Japanese linguistic system, which is markedly different from that of alphabetical languages. Of these regions, the left DLPFC needs first to be considered in Japanese children, because this region is responsible for the development of phonological awareness (Turkeltaub et al., 2003; Booth et al., 2004; Kovelman et al., 2012). However, the DLPFC also contributes to several other cognitive functions, such as working memory, set-shifting, and inhibition (Barbey et al., 2013; Forbes et al., 2014), so contributions of cognitive functions other than phonological awareness must be ruled out when DLPFC activity is measured. To this end, we adopted a cognitive subtraction design. For instance, subtracting brain activity during a task requiring only phonological storage (i.e., the baseline task) from activity during a task requiring both phonological storage and manipulation (i.e., the experimental task) allows the activity associated with phonological manipulation to be extracted while minimizing the effect of other cognitive functions. Although this design has several limitations, such as neglecting interaction effects (e.g., Friston et al., 1996), and results should therefore be interpreted carefully, it minimizes the number of experimental conditions and allows brain activity to be evaluated in young children without excessive mental and/or cognitive stress. The Japanese language has a unique linguistic system with two kinds of characters, kana and kanji. While kanji (i.e., Chinese characters) are ideograms, kana are phonograms which serve as a base for fundamental characters in Japanese. The phonological units represented by kana are moras, which usually consist of a single vowel with or without a preceding consonant (V or CV). Kana has an extremely straightforward correspondence between graphemes and phonological units, such that a single kana character denotes a single syllable (i.e., a mora; Wydell and Butterworth, 1999). In native Japanese speakers, phonological awareness develops in early childhood and becomes refined during middle childhood (Hara, 2001). A neuroimaging study of Japanese adults revealed that brain activation during phonological processing is dependent on the stimulus modality; for example, the bilateral superior temporal sulci activate in response to auditory stimuli and the bilateral temporoparietal lobes activate in response to visual stimuli (Seki et al., 2004).
However, few studies have focused on the development of neural mechanisms underlying phonological awareness in Japanese children. Thus, it is unclear whether processing of phonological information in Japanese activates the same brain regions as for alphabetic languages during childhood. The present study aimed to reveal developmental changes in the neural activity underlying phonological processing in Japanese children. Previous studies have used several phonological processing tasks, such as rhyming, phoneme identification, segmentation, blending, and manipulation (Yopp, 1992, 1995; Stahl and Murray, 1994). We adopted the mora reversal task (Seki et al., 2008), which takes the specific characteristics of the Japanese language into account. The task requires participants to listen to one word and then to say the morae of the word in reverse order, meaning that the task requires a high level of phonological processing skill. We expected that the demanding nature of the task would be ideal for revealing developmental changes in ability and corresponding neural changes. We used near-infrared spectroscopy (NIRS) to measure brain activity in Japanese children during the task, as this technique is suitable for both young participants (Kita et al., 2011; Wigal et al., 2012; Kikuchi et al., 2013) and auditory experiments because of minimal noise (unlike other techniques such as fMRI). NIRS is also less affected by movement artifacts (Kita et al., 2011) and is useful for measuring brain activity during tasks in which participants respond orally, because young children tend to move when they speak. Additionally, NIRS probes can be placed on children's heads easily and quickly, enabling us to sample a large number of children and to reveal developmental shifts based on a reliable sample size. Given our task and measurement system, we hypothesized that the left DLPFC would play an important role in phonological processing in Japanese children, similar to that seen for alphabetic languages.

Materials and Methods

Participants

A total of 103 right-handed native Japanese children (age range = 7.0-12.8 years, 52 females and 51 males) from a local public school were paid for their participation. We placed them into three age groups (Low, 7-8 years; Middle, 9-10 years; High, 11-12 years). We assessed verbal and non-verbal intellectual abilities using the number recall test (Kaufman Assessment Battery for Children, Matsubara et al., 1993) and Raven's Colored Progressive Matrices Test (Raven, 1976). The scores produced by the children in the three groups were within the normal range and consistent within the groups (Table 1). All participants were free from neurological and psychiatric disorders, according to reports from their parents. Written informed consent was obtained from all participants and their parents prior to the experiments. The research protocol was approved by the ethics committee at the National Center of Neurology and Psychiatry (approval number 20-8-JI10).

Stimuli and Tasks

For auditory word stimuli we used 20 concrete nouns with a length of three morae (about 1 s). The nouns had degrees of imaginability greater than 5.8 out of 7.0 (Tokyo Metropolitan Institute of Gerontology and NTT Communication Science Laboratories, 2005), indicating that Japanese children could easily understand all of the words. We digitally recorded the stimuli, which were spoken by a female native Japanese speaker. The peak of the sound intensity was equalized across stimuli using Sound It!
Basic for Windows (Internet Co. Ltd., Osaka, Japan; total root mean squares ranged between −10.82 and −15.52 dB). We conducted a mora reversal task (see Figure 1A) using C++ Builder XE2 (Embarcadero Technologies Inc., San Francisco, CA, USA). In the task, each participant was asked to respond aloud to a stimulus in either a "Repeat" or "Reverse" condition. In the "Repeat" condition, the participant was required to simply repeat the stimulus (e.g., ta-i-ko to ta-i-ko), and in the "Reverse" condition, the participant was instructed to repeat the series of morae corresponding to the stimulus in reverse order (e.g., ta-i-ko to ko-i-ta). As noted in the "Introduction" Section, we used cognitive subtraction with these two conditions, and the experimental contrast between them enabled us to assess brain activity associated with phonological manipulation. Additionally, we were able to remove irrelevant factors, such as oral responses or simple auditory perception, by setting the "Repeat" condition as the baseline.

FIGURE 1 | (A) An example of the behavioral task. Children were required to simply repeat the stimulus in the "Repeat" condition, and to repeat a series of morae corresponding to the stimulus in the "Reverse" condition. (B) Time course of the task and near-infrared spectroscopy (NIRS) measurements. The task was divided into three sections: Repeat section (0-40 s: four trials), Reverse section (40-100 s: six trials), and Repeat section (100-160 s). NIRS measurements were performed throughout the three sections, and these data were analyzed with baseline corrections from two baseline data periods to extract hemodynamic activity for phonological processing.

The stimuli were presented via the speaker of a laptop computer at about 60 dB with a stimulus onset asynchrony (SOA) of 10 s. There were 18 trials in total, and the first four and the last eight trials were performed under the "Repeat" condition ("Repeat" section), while the remaining trials were performed under the "Reverse" condition ("Reverse" section, Figure 1B). Before the task, the participants completed practice trials for the "Repeat" and "Reverse" conditions using two of the stimuli. The other 18 stimuli were presented in random order during the task. Participants were informed of whether a given trial was a "Repeat" or "Reverse" trial by a visual word cue presented on the screen for 2 s before the first trial in each section.

Measurements of Behavioral Data

Behavioral data were recorded using an IC recorder (HM-200, Sanyo Inc., Osaka, Japan). Response times (RTs) were defined as the duration between stimulus onset and the end of the response. The offsets of the responses were identified using Sound It! Basic 6.0 for Windows (Internet Co. Ltd., Osaka, Japan). We performed a univariate analysis of variance (ANOVA) with Sheffer's multiple-comparison procedure on mean correct RT, with group (Low, Middle, High) and condition (Repeat, Reverse) as factors. A separate ANOVA on the number of correct responses was conducted with group as a factor.

Recordings

We recorded changes in the concentration of oxygenated hemoglobin (oxy-Hb) at 16 locations on the forehead with a temporal resolution of 650 ms (OEG-16, Spectratech Inc., Yokohama, Japan). In our system, near-infrared laser diodes (emitter probes) emitted two wavelengths (~770 and 840 nm), and the re-emitted light was detected by avalanche photodiodes (detector probes) located 3.0 cm apart from each emitter probe (Kita et al., 2011; Tsujimoto et al., 2013).
Six emitter and detector probes were arranged in a 6 × 2 matrix. The center point of the bottom of the matrix was placed at Fpz, and the left and right corners of the bottom were located approximately at F7 and F8, respectively, in accordance with the international 10-10 system (Figure 2). Hence, we were able to obtain data from 16 locations between the emitter and detector probes. Filtering The NIRS data were low pass filtered offline at 0.05 Hz using a fast Fourier transform (FFT) to remove artifacts caused by minor movements of the participant (Kita et al., 2011;Makizako et al., 2013) because the NIRS data in the present setting was almost sustained across a block (i.e., 60 s) and was not synchronized to each stimulus (Supplementary Figure S1). We then carried out independent component analysis (ICA) for additional artifact rejection. Independent component analysis has been reported to be helpful in removing artifacts from physiological data [e.g., electroencephalography (EEG), Delorme et al., 2007]. Using ICA, data can be decomposed into statistically independent components that are linearly related to the original data, and after analysis and removal of the artifacts, the original data can be linearly restored from the components. If all artificial components are excluded, the restored data become artifact-free. Recently, ICA has been applied to NIRS data (Kohno et al., 2007;Medvedev et al., 2008). The changes in oxy-Hb measured by NIRS can be contaminated by the skin blood flow (Takahashi et al., 2011). Skin blood flow is easily affected by activity in the automatic nervous FIGURE 2 | Location of emitter (gray) and detector (black) probes and channels. Channel 9 was placed on Fpz (midpoint between Fp1 and Fp2), the probe at the bottom left corner was placed around F7, and the probe at the bottom right corner was placed around F8, in accordance with the international 10-10 system (for more information see Kita et al., 2011;Tsujimoto et al., 2013). We analyzed the averaged oxy-Hb signals from the channels at the left and right frontal areas (R-ROI: 1, 2, 3, 4 ch, blue rhombus, L-ROI: 13, 14, 15, 16 ch, red rhombus). system, in which spatial distribution is not localized because of zone of autonomic innervation (Kohno et al., 2007). Conversely, we considered activity in the frontal area to be localized during phonological manipulation (Kovelman et al., 2012). Hence, we were able to discriminate between the components associated with skin blood flow and cortical activation based on the spatial distribution. In this study, NIRS data were decomposed to 16 components using the FastICA R package (Marchini et al., 2007), and components associated with skin blood flow, i.e., characterized by an overall increase across all channels (Kohno et al., 2007), were excluded by visual inspection. The number of excluded components was 1-3 component(s) in each participant. We used the restored data for further analysis. Analysis We used linear fitting to make baseline corrections based on two baseline intervals: the mean across the 10-s-period before the "Reverse" section, and the mean across the final 10-speriod of the second "Repeat" section ( Figure 1B). To assess the laterality of activity during the "Reverse" section, we selected the averaged values in the channels corresponding to the right-and left-DLPFC as the regions of interest [right region of interest (R-ROI): 1-, 2-, 3-, and 4-ch; left region of interest (L-ROI): 13-, 14-, 15-, and 16-ch, Figure 2]. 
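The preprocessing pipeline described above (0.05-Hz low-pass filtering via FFT, ICA-based removal of skin-blood-flow components, and linear baseline correction anchored on two 10-s windows) can be sketched as follows. This is an approximation for illustration rather than the authors' code: it uses scikit-learn's FastICA in place of the FastICA R package, assumes a sampling period of 0.65 s, takes the indices of the components to reject as an input (the paper selected them by visual inspection), and hard-codes the block timing of Figure 1B.

```python
import numpy as np
from sklearn.decomposition import FastICA

def lowpass_fft(data, fs, cutoff_hz=0.05):
    """Zero out spectral content above `cutoff_hz` for each channel.
    data: array (n_samples, n_channels); fs: sampling rate in Hz."""
    spec = np.fft.rfft(data, axis=0)
    freqs = np.fft.rfftfreq(data.shape[0], d=1.0 / fs)
    spec[freqs > cutoff_hz, :] = 0.0
    return np.fft.irfft(spec, n=data.shape[0], axis=0)

def remove_components_ica(data, reject, n_components=16, seed=0):
    """Decompose the 16-channel record into independent components and rebuild
    it without the components listed in `reject` (e.g., components showing a
    uniform rise across all channels, attributed to skin blood flow)."""
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    sources = ica.fit_transform(data)          # (n_samples, n_components)
    sources[:, reject] = 0.0
    return ica.inverse_transform(sources)

def baseline_correct(signal, fs, reverse_onset_s=40.0, task_end_s=160.0, win_s=10.0):
    """Linear baseline correction through two anchor points: the mean of the
    10 s before the 'Reverse' section and the mean of the final 10 s of the
    second 'Repeat' section. `signal` is a 1-D ROI-averaged oxy-Hb trace
    covering the whole 160-s task."""
    t = np.arange(signal.shape[0]) / fs
    pre = (t >= reverse_onset_s - win_s) & (t < reverse_onset_s)
    post = (t >= task_end_s - win_s) & (t < task_end_s)
    x = np.array([t[pre].mean(), t[post].mean()])
    y = np.array([signal[pre].mean(), signal[post].mean()])
    baseline = y[0] + (y[1] - y[0]) / (x[1] - x[0]) * (t - x[0])
    return signal - baseline

# Example use on a (n_samples, 16) oxy-Hb array sampled every 0.65 s
# (`reject` and the ROI channel indices below are hypothetical):
# fs = 1 / 0.65
# clean = remove_components_ica(lowpass_fft(oxy_hb, fs), reject=[0, 5])
# l_roi = baseline_correct(clean[:, [12, 13, 14, 15]].mean(axis=1), fs)
```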
Moreover, we defined the L-R index as the averaged value of the L-ROI minus that of the R-ROI, to provide a simple measure of the degree of laterality. For the averaged ROI values, we conducted a two-way mixed factorial ANOVA with Scheffé's multiple-comparison procedure, including group (Low, Middle, High) and location (R-ROI, L-ROI) as factors. We assumed that the developmental trajectory of left-lateralization differs among individuals and that this individual difference influences the maturation of phonological manipulation. Hence, a hierarchical regression analysis (Cohen et al., 2003) was conducted on the mean correct RT, with age, the L-R index, and their interaction as predictors. All independent variables were centered on their means. In the first step, age (in months) and the L-R index were entered into the model. In the second step, the interaction between age and the L-R index was added. When the interaction was significant, we investigated further using simple slope analyses: slopes for the regression were computed at 1 SD above and below the mean of the moderator (Aiken and West, 1991). Data processing and statistical analyses were performed with R software (R Development Core Team, 2005) and SPSS 19.0 (SPSS Japan Inc., Tokyo, Japan).

Behavioral Results The mean correct RTs and the numbers of correct responses in each age group are presented in Table 2. Regarding the mean correct RTs, we found significant main effects of group and condition, F(2,100) = 7.65, p < 0.001; F(1,100) = 107.34, p < 0.01, and a significant interaction between these variables, F(2,100) = 9.15, p < 0.001. A simple effect analysis revealed a main effect of group in the Reverse section, where the mean correct RT was longer in the Low group than in the Middle group and was shortest in the High group (p < 0.05). In contrast, there was no significant effect of group in the Repeat section. In terms of the number of correct responses, all participants responded correctly to every stimulus in the Repeat section (12 trials, i.e., accuracy = 100%). In the Reverse section, the ANOVA showed a significant main effect of group, F(2,100) = 6.51, p = 0.002. Post hoc tests revealed that the number of correct responses was larger in the High and Middle groups than in the Low group (p < 0.05).

TABLE 2 | Behavioral performance (mean scores ± SD); these variables were compared using a one-way analysis of variance (ANOVA) with Bonferroni's multiple comparisons.

FIGURE 3 | Grand averaged waveforms of all channels for the three groups, generated using a 70 s period starting 10 s before the "Reverse" section.

Figure 3 presents the oxy-Hb waveforms of all channels for the 70 s period starting 10 s before the "Reverse" section. Figure 4 shows the oxy-Hb maps and the oxy-Hb waveforms at the L-ROI and R-ROI. We observed left-lateralized activation in the Middle and High groups but not in the Low group. In terms of the ROI waveforms, the increase in oxy-Hb signals was larger at the L-ROI than at the R-ROI in the Middle and High groups.

NIRS Results The two-way ANOVA revealed a significant interaction between group and location, F(2,100) = 5.26, p = 0.01. We did not find a difference between the oxy-Hb signals in the L-ROI and R-ROI in the Low group, F(1,41) = 2.54, p = 0.12, whereas the signals were significantly larger at the L-ROI than at the R-ROI in the Middle and High groups, F(1,34) = 5.88, p = 0.02, and F(1,25) = 4.36, p = 0.04, respectively (Figure 5). We found no significant main effects.
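The hierarchical regression described in the Analysis section above (mean-centered predictors, age and the L-R index entered in Step 1, their interaction added in Step 2, and simple slopes evaluated at 1 SD above and below the mean of the moderator) can be reproduced schematically in Python with statsmodels, rather than the SPSS/R workflow the authors used. The data below are simulated placeholders; only the modeling steps are meant to mirror the description.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 103
df = pd.DataFrame({
    "age_months": rng.uniform(84, 154, n),      # roughly 7.0-12.8 years, illustrative
    "lr_index": rng.normal(0.0, 0.05, n),       # L-ROI minus R-ROI oxy-Hb, illustrative
})
df["rt"] = 5.0 - 0.01 * df["age_months"] + rng.normal(0, 0.5, n)   # fake mean correct RTs

# Mean-center the predictors, as described above.
df["age_c"] = df["age_months"] - df["age_months"].mean()
df["lr_c"] = df["lr_index"] - df["lr_index"].mean()

step1 = smf.ols("rt ~ age_c + lr_c", data=df).fit()      # Step 1: main effects only
step2 = smf.ols("rt ~ age_c * lr_c", data=df).fit()      # Step 2: add the interaction
print(anova_lm(step1, step2))                            # does the interaction improve fit?

# Simple slopes of age at +/- 1 SD of the (centered) L-R index.
sd = df["lr_c"].std()
b_age, b_int = step2.params["age_c"], step2.params["age_c:lr_c"]
for label, moderator in [("+1 SD", sd), ("-1 SD", -sd)]:
    print(label, "slope of age:", b_age + b_int * moderator)
```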
To examine the possibility that the error-related activity contaminated previous results, we excluded 18 participants (Low: n = 11, Middle: n = 4, High: n = 3) and performed statistical analysis. Consistent with the previous analysis, the two-way ANOVA revealed a significant interaction between group and location [F(2,82) = 4.28, p < 0.05]. We also revealed the significant differences between L-ROI and R-ROI in Middle and High groups (p < 0.05) and no significant difference in Low groups (p = 0.26). In addition, we further performed a two-way mixed factorial ANOVA including accuracy (participants with all corrects, ones with some errors) and locations (R-ROI, L-ROI) in Low groups. There was no significant main effect of accuracy [F(1,40) = 0.02, p = 0.90]. These results indicated that number of error trials did not affect the NIRS results. (B) Grand averaged waveforms for the three groups at each ROI, generated using a 70 s period starting 10 s before the "Reverse" section. FIGURE 5 | Means and SE of oxy-Hb signals during last 10 s of the 'Reverse' section at each ROI. The R-ROI and L-ROI were set to correspond to the right DLPFC and left DLPFC, respectively. * p < 0.05. Hierarchical Multiple Regression Analysis The results of the hierarchical multiple regression analyses are shown in Table 3. The model was significant in the first step, F(2,100) = 11.08, p < 0.001. In the second step, the addition of the interaction between the L-R index and age improved predictive power, F(3,99) = 10.19, p < 0.001. Interestingly, we found a significant interaction between age and the L-R index in the second step (β = −0.24, p = 0.009). Simple slopes analyses revealed that age significantly predicted the mean correct RT at 1 SD above the mean of the L-R index (β=−0.74, p < 0.001) and age did not significantly predict the mean correct RT at 1 SD below the mean of the L-R index (β=−0.18, p = 0.179). These results indicate that children with left-lateralized brain activity tend to show improvements in behavioral performance as they age, while this improvement is less likely in children with non-lateralized activity. Discussion In the present study, we conducted the mora reversal task with native-born Japanese children aged 7-12. We examined developmental shifts in brain activity associated with phonological processing using the NIRS system. This system is well suited for neuroimaging with an auditory-vocal experimental paradigm and a large sample population of children. While younger children made more mistakes and needed more time to sort phonological information in reverse order, older children completed the task quickly and accurately. Neuroimaging data revealed the children in the Middle and High groups to have increased brain activity in the left compared with the right DLPFC during the task, although this was not observed in the children in the Low group. Additionally, we found significant effects of age and lateralized activity on behavioral performance. During the early stage of development, the degree of left lateralization had a smaller effect on behavioral performance. However, the degree of left lateralization had a stronger influence on behavioral performance in the late stage of development. Our findings suggest that a common brain region plays a critical role in the development of phonological processing among different languages systems, including Japanese and alphabetic languages. 
We found that behavioral performance on the mora reversal task gradually improved with age, as indicated by decreasing RTs and rising accuracy. This indicates that phonological awareness continues to grow until at least age 12 in Japanese speakers. The phonological awareness task is thought to be hierarchical, in that it can measure behavior on multiple levels, from easy to difficult (Treiman and Zukowski, 1991;Yopp and Yopp, 2000). Children in early childhood may only be able to engage in rhyming or identification tasks (the lower tier of the hierarchy), but they gradually grow and acquire skills until they can complete more difficult tasks, such as blending, deletion, and manipulating. The reversal task in the present study is located high in the hierarchy of difficulty because it requires the child to segment phonological information from an auditory stimulus, manipulate the information by reversing the order, and finally blend the information together. Using this higher-level task, we were able to clarify the maturational development of phonological awareness in Japanese children, and use the behavioral evidence to examine developmental shifts in brain activity underlying phonological processing. We found that developmental changes in brain activity, as indicated by activation during the mora reversal task, became more left-lateralized as the development of phonological awareness increased. Previous studies have reported developmental shifts in brain activity for cognitive tasks like verbal fluency, specifically, the distributions range from diffuse to focal as children mature (Gaillard et al., 2000;Durston et al., 2006). The maturation of cognitive ability appears to be characterized by a diminishing of irrelevant brain activity while relevant activity remains. The left DLPFC, the focal region identified in the present study, plays a pivotal role in phonological awareness from the early stages of the development (Kovelman et al., 2012) and the degree of activity in the left DLPFC has been correlated with phonological processing skills during childhood (Turkeltaub et al., 2003). As these associations are evident only on the left side, we consider left-lateralized brain activity in the present task to reflect a maturational pattern of brain activity underlying the development of phonological awareness in Japanese speakers. It appears that left-lateralization of brain activity affects behavioral performance differently depending on a child's age. Specifically, the influence of left-lateralization is blurred in younger children, while it is clearly apparent in older children. This developmental shift of the influence of leftlateralization is thought to be closely linked to the linguistic characteristics of the Japanese language. In Japanese kana, there is a direct correspondence between phonology and orthography, which means that one character strictly corresponds to one syllable (Wydell and Butterworth, 1999;Seki et al., 2004). This correspondence enables easy back-and-forth transformations between phonemes and graphemes. Thus, it is possible that Japanese speakers can easily access both auditory and visual information even if phonological tasks are conducted with only auditory stimuli. Additionally, some children may have used a mental representation of a kana location table while completing the present task. A kana location table is used for language learning, and consists of kana characters placed in a grid with 5 (vowels) by 10 (consonants). 
It is easy to locate the kana characters on the table, and auditory information can be transformed into visual information rapidly and accurately. Young Japanese children are especially familiar with the kana location table because it is often used for acquisition of kana in early elementary grades. Seki et al. (2004) reported that some Japanese participants used the table during a phonological task. It is possible that with the advantages of direct correspondence and the location table, some participants may have been able to perform the present task using both auditory (i.e., phoneme) and visuospatial information (i.e., grapheme) despite being unable to complete the task with only phonological information. Previous neuroimaging studies have revealed that the right DLPFC, including the inferior frontal cortex, is involved in visuospatial processing (McCarthy et al., 1996;Owen et al., 1998;Smith and Jonides, 1999). Hoshi et al. (2000) also reported that right DLPFC are active when the normal participants transformed auditory information into visuospatial information in the backward digit span task, which is similar to the present task. In the present study, the younger participants may have used visuospatial in addition to phonological information, and thus exhibited activation in the bilateral DLPFC. In contrast, older participants may have relied only on phonological information to accomplish the task, and thus did activate the right DLPFC only to a lesser extent than left DLPFC. This lateralization appears to reflect a neurobiological change underlying the development of phonological processing skills specific to the Japanese language. In the present study, we used the NIRS system to examine neurobiological characteristics in a population of young children who generally are not comfortable in restrictive environments such as a MRI scanner. Despite this advantage, the NIRS system has some technical limitations. We encountered difficulty when attempting to measure brain activity in deep areas involved in phonological function, such as the basal ganglia (Kita et al., 2013) because the system employs near infrared lights, and is thus suitable for measuring activity in cortex only. In addition, activity at ROIs is not only associated with activity of DLPFC and the inferior frontal cortex. In one study with adults, activities of these regions were evaluated independently (Tupak et al., 2012), whereas we could not discriminate these regions because of head size of children. Since left DLPFC has a pivotal role of the phonological manipulation (Kovelman et al., 2012), we considered that our NIRS results were mainly associated with left DLPFC rather than left IFG. We also focused only on the frontal area, as it is a critical brain region for phonological processing, and did not discuss connectivity between the DLPFC and other areas. These technical limitations could be addressed in future research using NIRS systems with more channels than the one used in our study (e.g., Pu et al., 2013). Another limitation is experimental design. While we introduced cognitive subtraction design for extracting brain activity for phonological manipulation in young children, we cannot entirely exclude the effects of other cognitive functions. The present NIRS data, which were acquired by subtracting the activities on "Repeat" condition from those on "Reverse" condition, might include interaction effects of both conditions (e.g., Friston et al., 1996) and we should interpret the present results carefully. 
Future research could employ fMRI with a carefully chosen experimental design, such as a factorial design, that takes into account both participant age (i.e., young children) and task characteristics (i.e., auditory stimuli and oral responses). This may reveal more detail regarding the neurobiological changes underlying phonological processing in Japanese speakers. Because our data are cross-sectional, we cannot draw firm conclusions about the relationship between left-lateralized brain activity and the maturation of phonological manipulation. In particular, the present study cannot specify the direction of the causal relationship between behavioral performance and lateralized brain activity as it varies with age. Further studies with longitudinal data from Japanese populations are needed to confirm the present results. Using more sophisticated experimental settings, future work should also examine whether children show different brain activity even when their behavioral performance does not differ, which would help specify the direction of the causal relationship.

Conclusion We measured hemodynamic activity in a population of Japanese children and observed a neurobiological change over the course of development. Younger children had increased activity in the bilateral DLPFC, which may reflect immature phonological processing skills. Conversely, older children showed dominant activity in the left DLPFC compared with the right DLPFC, suggesting that they had already acquired phonological processing skills. Thus, it appears that brain activity in the frontal area becomes lateralized during the development of phonological processing in Japanese speakers. These initial findings are useful for discussions of the neurobiological characteristics of children with developmental dyslexia, who are often too young to undergo fMRI studies. We anticipate that our results will lead to a better understanding of dyslexia in Japanese speakers.

Funding This work was supported in part by an Intramural Research Grant (22-6; Clinical Research for Diagnostic and Therapeutic Innovations in Developmental Disorders) for Neurological and Psychiatric Disorders of NCNP and a Grant-in-Aid for Young Scientists (B; 26780524 to YK).
Combinatorial properties and characterization of glued semigroups

This work focuses on the combinatorial properties of glued semigroups and provides a combinatorial characterization of them. Some classical results for affine glued semigroups are generalized, and some methods to obtain glued semigroups are developed.

Introduction Let $S = \langle n_1, \dots, n_l \rangle$ be a finitely generated commutative semigroup with zero element which is reduced (i.e. $S \cap (-S) = \{0\}$). We suppose that S is cancellative, that is to say, if $m + n = m + n'$ with $m, n, n' \in S$, then $n = n'$. Under these conditions, we may assume that S is a subsemigroup of a (not necessarily torsion-free) group. If S is torsion-free, then S is an affine semigroup. From now on, we assume that all the semigroups appearing in this work are finitely generated, commutative and reduced; in the sequel we therefore omit these adjectives. Let k be a field and $k[X_1, \dots, X_l]$ the polynomial ring in l indeterminates. This polynomial ring is obviously an S-graded ring: by assigning the S-degree $n_i$ to the indeterminate $X_i$, the S-degree of $X^\alpha = X_1^{\alpha_1} \cdots X_l^{\alpha_l}$ is $\sum_{i=1}^{l} \alpha_i n_i \in S$. It is well known that the ideal (denoted by $I_S$) generated by
$$\{\, X^\alpha - X^\beta : \textstyle\sum_{i=1}^{l} \alpha_i n_i = \sum_{i=1}^{l} \beta_i n_i \,\}$$
is an S-homogeneous binomial ideal called the semigroup ideal (see [6] for details). If S is torsion-free, the ideal obtained defines a toric variety (see [12] and the references therein). By Nakayama's lemma, all minimal generating sets of $I_S$ have the same cardinality and the S-degrees of their elements are determined. In [1], [4] and [7] the authors study the minimal generating sets of semigroup ideals by means of the homology of different simplicial complexes (with isomorphic homologies) associated to the semigroup. For any $m \in S$, set
$$C_m = \{\, X^\alpha : \textstyle\sum_{i=1}^{l} \alpha_i n_i = m \,\}, \qquad (1)$$
and consider the abstract simplicial complex (used in [4] and [7]) on the vertex set $C_m$,
$$\nabla_m = \{\, F \subseteq C_m : \gcd(F) \neq 1 \,\},$$
where gcd(F) is the greatest common divisor of the monomials in F. The main aim of this work is to study the semigroups which result from the gluing of two others. This concept was introduced by Rosales in [10], and it is closely related to ideals that are complete intersections (see [13] and the references therein). A semigroup S minimally generated by $A_1 \cup A_2$ (with $A_1 = \{n_1, \dots, n_r\}$ and $A_2 = \{n_{r+1}, \dots, n_l\}$) is the gluing of $S_1 = \langle A_1 \rangle$ and $S_2 = \langle A_2 \rangle$ if there exists a set of generators $\rho$ of $I_S$ of the form
$$\rho = \rho_1 \cup \rho_2 \cup \{ X^\gamma - X^{\gamma'} \},$$
where $\rho_1, \rho_2$ are generating sets of $I_{S_1}$ and $I_{S_2}$ respectively, and $X^\gamma - X^{\gamma'} \in I_S$ is such that the support of $\gamma$ (supp(γ)) is included in $\{1, \dots, r\}$ and supp(γ') ⊆ {r + 1, ..., l}. Equivalently, S is the gluing of $S_1$ and $S_2$ if $I_S = I_{S_1} + I_{S_2} + \langle X^\gamma - X^{\gamma'} \rangle$. We call such semigroups glued semigroups. In Section 1, we define the mathematical tools required in order to generalize to non-torsion-free semigroups a classical result concerning affine semigroups (Proposition 2). In Section 2, we examine the non-connected simplicial complexes $\nabla_m$ associated to glued semigroups. By understanding the vertices of the connected components of these complexes, we give a combinatorial characterization of glued semigroups as well as of their glued degrees (Theorem 6). Besides, in Corollary 7 we deduce the conditions under which the ideal of a glued semigroup is uniquely generated. Although Theorem 6 and Corollary 7 provide the basis for implementing algorithms, these algorithms may not be efficient.
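As a concrete, if deliberately naive, illustration of the objects just defined, the following Python sketch enumerates C_m for a small numerical semigroup and counts the connected components of (the 1-skeleton of) ∇_m, so that the S-degrees whose complex is non-connected, i.e. the Betti degrees, can be listed by brute force. The semigroup ⟨3, 5⟩ in the example is chosen here purely for illustration; the paper itself does not prescribe any implementation.

```python
from itertools import count

def factorizations(m, gens):
    """All exponent vectors a with sum(a[i] * gens[i]) == m; these index the
    monomials X^a in C_m (brute force, fine for small examples)."""
    def rec(remaining, idx):
        if idx == len(gens) - 1:
            if remaining % gens[idx] == 0:
                yield (remaining // gens[idx],)
            return
        for k in range(remaining // gens[idx] + 1):
            for tail in rec(remaining - k * gens[idx], idx + 1):
                yield (k,) + tail
    return list(rec(m, 0)) if gens else []

def connected_components(vertices):
    """Components of the graph on C_m in which two monomials are joined when
    they share a variable (gcd != 1); the complex nabla_m has the same
    connectivity as this 1-skeleton."""
    share = lambda a, b: any(x > 0 and y > 0 for x, y in zip(a, b))
    comps, seen = [], set()
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in vertices if w not in comp and share(u, w))
        seen |= comp
        comps.append(comp)
    return comps

# Example: the numerical semigroup S = <3, 5>.  An S-degree m lies in Betti(S)
# exactly when nabla_m is non-connected (more than one component).
gens = (3, 5)
for m in range(1, 31):
    cm = factorizations(m, gens)
    if len(connected_components(cm)) > 1:
        print("Betti degree:", m, "C_m =", cm)   # prints m = 15 only
```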
In this sense, the goal of this section is to provide further knowledge about (glued) semigroups employing combinatorial theory no matter the efficiency of the obtained algorithms. We devote the last part of this work, Section 3, to construct glued semigroups (Corollary 10), complete intersection glued semigroups and affine glued semigroups (Subsection 3.1). We create the affine glued semigroups by solving an integer programming problem. Preliminaries and generalizations about glued semigroups In this section we summarize some notations and definitions, and give a generalization to non-torsion free semigroups of [10,Theorem 1.4]. We say that a binomial in I S is indispensable if it is in all the system of generators of I S (up to a scalar multiple). This kind of binomials were introduced in [9]. This notion comes from the Algebraic Statistics. In [8] the authors characterize the indispensable binomials by using the simplicial complexes ∇ m . Note that if I S is generated by its indispensable binomials, I S is uniquely generated (up to a scalar multiple). Being the notation set as in the introduction, we associate a lattice to the semigroup S: ker S ⊂ Z l , α = (α 1 , . . . , α l ) ∈ ker S if l i=1 α i n i = 0. The property "S is reduced" is equivalent to ker S ∩ N l = (0). Given a system of binomial generators of I S , ker S is generated by a set whose elements are α − β with X α − X β being in the system of binomial generators. We call M(I S ) to a minimal generating set of I S , and M(I S ) m ⊂ M(I S ) to the set of their elements whose S−degree are equal to m ∈ S. Betti(S) is the set of the S−degrees of the elements in M(I S ). S is called a complete intersection semigroup if I S is minimally generated by rank(ker S) elements. Let C(∇ m ) be the number of connected components of a non-connected ∇ m , this means that the cardinality of M(I S ) m is C(∇ m ) − 1 (see Remark 2.6 in [1] and Theorem 3 and Corollary 4 in [7]). Note that the complexes associated to the elements in Betti(S) are non-connected. The relation between M(I S ) and Betti(S) is studied next. Construction 1. ([4, Proposition 1]). For each m ∈ Betti(S), one can construct M(I S ) m by taking C(∇ m ) −1 binomials whose monomials are in different connected components of ∇ m and satisfying that two different binomials have not their corresponding monomials in the same components. This let us construct a minimal generating set of I S in a combinatorial way. Now, we are going to introduce the notations that we use to work with glued semigroups. Let S be minimally 1 generated by A 1 A 2 with A 1 = {a 1 , . . . , a r } and A 2 = {b 1 , . . . , b t }. From now on, we identify the sets A 1 and A 2 with the matrixes We denote by k[A 1 ] and k[A 2 ] to the polinomial rings k[X 1 , . . . , X r ] and k[Y 1 , . . . , Y t ], respectively. We call pure monomials to the monomials with indeterminates only in X 1 , . . . , X r or Y 1 , . . . , Y t . Conversely, we call mixed monomials to the monomials with indeterminates in Xs and Y s. Given S, the gluing of S 1 = A 1 and S 2 = A 2 , we say that In this way, it is clear that if S is a glued semigroup, the lattice ker S has a basis such as where the supports of the elements in L 1 are in {1, . . . , r}, the supports of the elements in L 2 are in {r + 1, . . . , r + t}, Moreover, since S is reduced, one has that L 1 Z ∩ N r+t = L 2 Z ∩ N r+t = (0). We will denote by {ρ 1i } i to the elements in L 1 and by {ρ 2i } i to the elements in L 2 . 
and dZ are the associated commutative groups of S 1 , S 2 and {d}. Proof. Let's assume that S is the gluing of S 1 and S 2 . In this case, ker S is generated by the set (3). Conversely, we suppose that there exists d ∈ (S 1 ∩S 2 )\{0} such that G(S 1 )∩ G(S 2 ) = dZ. Assuming this, we will prove that I S = I S1 This last polynomial is in I S1 • The case λ < 0 can be solved likewise. Therefore, we conclude that It follows that, given the partition of the system of generators of S, the glued degree is unique. Glued semigroups and combinatorics In this section, we approach the study of simplicial complexes ∇ m associated with glued semigroups. We characterize the glued semigroups by means of the non-connected simplicial complexes. For any m ∈ S, we redefine C m from (1), as and consider the vertex sets and the simplicial complexes where A 1 = {a 1 , . . . , a r } and A 2 = {b 1 , . . . , b t } as in Section 1. Trivially, the relations between ∇ A1 m , ∇ A2 m and ∇ m are The following result shows a relevant property of the simplicial complexes associated to glued semigroups. Lemma 3. Let S be the gluing of S 1 and S 2 , and m ∈ Betti(S). Then all the connected components of ∇ m have at least a pure monomial. In addition, all mixed monomials of ∇ m are in the same connected component. Proof. Supposed that there exists C, a connected component of ∇ m only with mixed monomials. In this case, in any generating set of I S there is, at least, a binomial with a mixed monomial (by Construction 1). But there is not this binomial. This is not possible because S is the gluing of S 1 and S 2 . Since S is a glued semigroup, ker S has a system of generators as the intro- • The case λ < 0 can be solved likewise. In any case, X α Y β and X γ Y δ are in the same connected component of ∇ m . The following Lemma describes the simplicial complexes that correspond to the S−degrees that are multiples of the glued degree. Proof The following Lemma is a combinatorial version of [5,Lemma 9] and it is a necessary condition for our combinatorial characterization theorem (Theorem 6). Lemma 5. Let S be the gluing of S 1 and S 2 , and d ∈ S the glued degree. Then the elements in C d are pure monomials and d ∈ Betti(S). Proof. The reader can check that m S m if m − m ∈ S, is a well defined partial order on S. Let's assume that there exists a mixed monomial T ∈ C d . By Lemma 3, there exists a pure monomial in But this is not possible due to the fact that d ≺ S d. Consequently, one can consider that T 1 is a mixed monomial and C A1 d = ∅, but C A2 d is not empty. If there exists a pure monomial in C A2 d connected to a mixed monomial in C d , the above process can be repeated until T 2 , Y b2 ∈ C d are obtained, with T 2 a mixed monomial. This process is finite by degree reasons. So, after some steps, one find a d (i) ∈ S such that ∇ d (i) is not connected (i.e. d (i) ∈ Betti(S)) and it has a connected component whose vertices are only mixed monomials. This contradicts Lemma 3. After examining the structure of the simplicial complexes associated to the glued semigroups, we enunciate a combinatorial characterization theorem by means of the non-connected simplicial complexes ∇ m . This result helps to understand the nature of glued semigroups and increase our knowledge on them. For all Besides, the above d ∈ Betti(S) is the glued degree. Proof. If S is the gluing of S 1 and S 2 , we obtain immediately the theorem from Lemmas 3, 4 and 5. 
Conversely, given d ∈ Betti(S) \ {d}, by the hypothesis 1 and 3, we can construct the sets, M(I S1 ) d and M(I S2 ) d , in a similar way to Construction 1, but only taking binomials whose monomials are in C A1 d or C A2 d . Analogously, if we consider d ∈ Betti(S), we construct the set M(I S ) d with C(∇ d ) − 1 binomials as the union of: We conclude that m∈Betti(S) generating set of I S . So S is the gluing of S 1 and S 2 . From Theorem 6 we obtain an equivalent property to that in [5, Theorem 12] by using the language of monomials and binomials. Corollary 7. Let S be the gluing of S 1 and S 2 , and X γ X − Y γ Y ∈ I S a glued binomial with S−degree d. Then, I S is (minimally) generated by its indispensable binomials if and only if: • I S1 and I S2 are (minimally) generated by their indispensable binomials. • For all d ∈ Betti(S), the elements of C d are pure monomials. Proof. Suppose that I S is generated by its indispensable binomials. By [8,Corollary 6], ∀m ∈ Betti(S), ∇ m has only two vertices. In particular, by d or ∇ A2 d (by Lemma 1). In any case, X γ X − Y γ Y ∈ I S is an indispensable binomial, and I S1 , I S2 are generated by their indispensable binomials. Conversely, we suppose that I S is not generated by its indispensable binomials. So, ∃d ∈ Betti(S) \ {d} such that ∇ d has more than two vertices in at least two different connected components. Taking into account our hypothesis, there are not mixed monomials in ∇ d and so: , then I S1 (or I S2 ) is not generated by its indispensable binomials. • In other case, C A1 d = ∅ = C A2 d , by Lemma 4, d = jd, with j ∈ N, and so X (j−1)γ X Y γ Y ∈ C d , which contradicts our hypothesis. Thus, we conclude that I S is generated by its indispensable binomials. We illustrate the above results with the following example taken from [13]. From Corollary 7, I S is not generated by its indispensable binomials (I S has only four indispensable binomials). Generating glued semigroups In this section, we give an algorithm with the aim of producing many examples of glued semigroups. Furthermore, we construct affine glued semigroups by means of solving an integer programming problem. First of all, we consider two semigroups T 1 and T 2 . Keeping the same notation we have followed throughout the whole article, let A 1 = {a 1 , . . . , a r } and A 2 = {b 1 , . . . , b t } be two minimal generator sets of the semigroups T 1 = A 1 and T 2 = A 2 , and L j = {ρ ji } i be a basis of ker T j with j = 1, 2. Let γ X and γ Y be two nonzero elements in N r and N t respectively 2 , and consider the integer matrix Let S be a semigroup such that ker S is the lattice generated by the rows of the matrix A. This semigroup can be computed by using the Smith Normal Form (see [11,Chapter 2]). Denote by B 1 , B 2 to two sets of cardinality r and t respectively, satisfying S = B 1 , B 2 and ker( B 1 , B 2 ) is generated by the rows of A. The following Proposition shows that the semigroup S satisfies one of the conditions to be a glued semigroup. Proof. Likewise in the proof of the necessary condition of Proposition 2, since we only used that ker S has a basis as (3). This condition is not enough for S to be a glued semigroup, because the generating set B 1 ∪ B 2 could be non-minimal. For example, if one get the numerical semigroups T 1 = 3, 5 , T 2 = 2, 7 and (γ X , γ Y ) = (1, 0, 2, 0), one have the matrix as (5)  Next corollary is devoted to solve this issue. 2 Note that γ X / ∈ ker T 1 and γ Y / ∈ ker T 2 because these semigroups are reduced. 
• If λγ X1 = 0, T 1 is not minimally generated, but it is not possible by hypothesis. We have just proved that γ X = (1, 0, . . . , 0). In the general case, if S is not minimally generated it is because either γ X or γ Y are elements in the canonical bases of N r or N t , respectively. To avoid this situation, it is sufficient to take γ X and γ Y satisfying From the above result we obtain a characterization of the glued semigroups: S is a glued semigroup if and only if ker S has a basis as (3) satisfying the condition (6). We compute their associated lattices This semigroup verifies that ker S is generated by the rows of the above matrix, and it is the gluing of the semigroups B 1 and B 2 . The ideal I S ⊂ C[x 1 , . . . , x 4 , y 1 , . . . , y 3 ] is generated 3 by then S is really a glued semigroup. In this way, we provide a procedure that allows us the construction of (glued) semigroups that are complete intersections. Regarding the following Lemma, it is sufficient that the semigroups T 1 and T 2 are complete intersections in order to be S as well. Next we give an algorithm to generate many examples of complete intersection semigroups. Lemma 12. T 1 and T 2 are two complete intersection semigroups if and only if S is complete intersection semigroup. Generating affine glued semigroups As one can check in Example 11, the semigroup S is not necessarily torsionfree. In general, a semigroup T is affine, i.e. it is torsion-free, if and only if the invariant factors 4 of the matrix whose rows are a basis of ker T are equal to one. We suppose that if the Smith Normal Form, D, of a matrix has some zero-columns, which are on the right side of D. We use this fact to give conditions for S being torsion-free. Let P 1 , P 2 , Q 1 and Q 2 some matrices with determinant ±1 (i.e. unimodular matrices) such that D 1 = P 1 L 1 Q 1 and D 2 = P 2 L 2 Q 2 are the Smith Normal Form of L 1 and L 2 , respectively. If T 1 and T 2 are two affine semigroups, the invariant factors of L 1 and L 2 are equal to 1. Then where γ X = γ X Q 1 and γ Y = −γ Y Q 2 . Let s 1 and s 2 be the numbers of zerocolumns of D 1 and D 2 (s 1 , s 2 > 0 because T 1 and T 2 are reduced). Lemma 13. S is an affine semigroup if and only if Proof. With the conditions imposed to T 1 , T 2 and (γ X , γ Y ), gcd {γ Xi } r i=r−s1 ∪ {γ Y i } t i=t−s2 = 1 is a necessary and sufficient condition for the invariant factors of A to be all equal to one. In the following Corollary we give the explicit conditions that γ X and γ Y must fulfill to construct an affine semigroup. 1. T 1 and T 2 are two affine semigroups. Proof. Trivial by construction, Corollary 10 and Lemma 13. Therefore, to construct an affine glued semigroup, it is enough to take two affine semigroups, and any solution, (γ X , γ Y ), of the equations of the above corollary. Example 15. Let T 1 and T 2 be the semigroups of Example 11.
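To complement the Smith-normal-form construction of this section, the following sketch specializes the gluing criterion of Proposition 2 to numerical semigroups, for which the group generated by ⟨A⟩ is gcd(A)·Z and hence G(S1) ∩ G(S2) = lcm(gcd(A1), gcd(A2))·Z. The code simply tests whether that generator also lies in S1 ∩ S2 using a small dynamic program; minimality of A1 ∪ A2 as a generating set, which the gluing definition assumes, is not checked. The pair ⟨3, 5⟩ and ⟨2, 7⟩ reuses the generators mentioned above, while ⟨4, 6⟩ and ⟨9⟩ is an added example not taken from the paper.

```python
from math import gcd
from functools import reduce

def members_up_to(gens, limit):
    """Boolean table of the elements of the numerical semigroup <gens> up to `limit`."""
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for n in range(1, limit + 1):
        reachable[n] = any(g <= n and reachable[n - g] for g in gens)
    return reachable

def lcm(a, b):
    return a * b // gcd(a, b)

def gluing_degree(A1, A2):
    """For numerical semigroups S1 = <A1> and S2 = <A2>, G(S1) ∩ G(S2) is
    generated by d = lcm(gcd(A1), gcd(A2)).  Following Proposition 2,
    S = <A1 ∪ A2> is a gluing of S1 and S2 precisely when d also lies in
    S1 ∩ S2 (assuming A1 ∪ A2 minimally generates S, which is not verified
    here).  Returns the glued degree d if the condition holds, else None."""
    d = lcm(reduce(gcd, A1), reduce(gcd, A2))
    if members_up_to(A1, d)[d] and members_up_to(A2, d)[d]:
        return d
    return None

print(gluing_degree([4, 6], [9]))     # 18: <4, 6, 9> is a gluing with glued degree 18
print(gluing_degree([3, 5], [2, 7]))  # None: d = 1 does not lie in <3, 5> or <2, 7>
```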
Assessment of the toxicity and biodegradation of amino acid-based ionic liquids

Amino acid-based ionic liquids (AAILs) are generally thought of as green solvents and are widely used in many areas without systematic assessment of their effect on the environment or human health. In this work, a series of AAILs with different cations and amino acid anions were prepared and characterized, after which their microbial toxicity, phytotoxicity, and biodegradability were evaluated. The results showed that not all AAILs had low toxicity against microorganisms and that some AAILs were highly toxic towards the targeted microorganisms. The phytotoxic effect of the AAILs on rice (Oryza sativa L.) further demonstrated that AAILs should not be presumed to be non-toxic to plants. Moreover, the biodegradability tests showed that the majority of AAILs were not satisfactorily biodegradable. In summary, not all AAILs are non-toxic or biodegradable, and their effect on the environment and human health must be assessed before their mass preparation and application.

Introduction Ionic liquids (ILs) are defined as molten salts based on cations and anions with melting points around or below 100 °C. Compared with traditional molecular solvents, ILs possess unique physicochemical properties, such as high thermal stability, low vapor pressure, excellent dissolution ability, high ionic conductivity, a wide electrochemical window, and strong catalytic performance. [1][2][3][4] Owing to these attractive properties, ILs are considered to be "green" solvents and are widely used in catalysis, 5 electrochemistry, 6 extraction and separation, 7 CO2 capture, 8 SO2 capture, 9 lignocellulosic biomass pretreatment, 10 and pharmaceutics and medicine. 11 Therefore, the release of ILs into the environment is unavoidable. Although ILs possess negligible volatility, which prevents them from entering the atmosphere, they dissolve easily in water and can then enter the aquatic environment. Unfortunately, not enough attention was paid to the environmental effects of IL exposure when ILs were first introduced. More recently, significant efforts have been made to assess IL (eco)toxicity and biodegradability in order to avoid potential risks. [12][13][14][15] IL toxicity has been assessed using diverse biological models, such as bacteria, 15 fungi, 16 invertebrates, 17 algae, 18 fish, 19 plants, 20 and cells. 21 IL biodegradability has been evaluated using standardized assays approved by the Organization for Economic Cooperation and Development (OECD) and the International Organization for Standardization (ISO), such as the dissolved organic carbon die-away test (OECD 301A), the closed-bottle test (OECD 301D), and the CO2 headspace test (ISO 14593). 22 Most of these studies showed that the blanket assumption that ILs are "green solvents" is incorrect. Moreover, researchers found that some ILs did not conform to the principles of "green chemistry", since they exhibited higher toxicity than common volatile organic compounds such as methanol and dichloromethane and had low biodegradability. 23 In response, ILs containing cations and anions from natural sources, such as choline, 24 mandelic acid, 25 betaine, 26 sugars, 27 and amino acids, 28 have attracted increasing attention on the theory that they would be less toxic and more biodegradable. Natural amino acids can be converted into both cations and anions for the preparation of amino acid-based ionic liquids (AAILs).
23,29 The most widely used AAILs contain amino acid anions, because these are simple to be produced using an acidbase neutralization reaction. 23 AAILs possess not only the unique physiochemical properties of traditional ILs, but also show lower toxicity and better biodegradability. 30 Moreover, amino acids are abundant and cheap biomolecules, which reduces the cost of IL synthesis. 30 Today, AAILs are considered to be "environmentally-friendly" solvents that can be used for diverse applications, including biomass pretreatment, 31 as catalysts in organic reactions, 32 and CO 2 capture. 33 The (eco)toxicity and biodegradation of some AAILs towards microorganisms, green algae, and cells have been evaluated in recent years, 34,35 and the majority of AAILs were found to have lower toxicity than the original ILs. However, if all AAILs are to be labeled "environmentally-friendly", then further assessments are needed. The toxicity and biodegradability of AAILs made up of amino acid anions and different cations, such as cholinium, imidazolium, pyrrolidinium, piperidinium, quaternary ammonium, and quaternary phosphonium, may be different. For example, Hou et al. 36 and assessed their effects on enzymes, bacteria and biodegradability. They found that [Cho][AA] ILs had low toxicity and good biodegradability due to the choline cation. In contrast, when the cytotoxicity of AAILs composed of alkylimidazolium cations and amino acids towards CaCo-2 and NIH/3T3 cells was assessed, the researchers found that the AAILs showed biologically unsafe behavior and that their cytotoxicity was similar to that of the conventional ILs 1-butyl-3-methylimidazolium (L)-lactate chloride and 1-butyl-3-methylimidazolium chloride. 30 Although some studies on the toxicity or biodegradability of AAILs have been published, 13 (Fig. 1), and assessed for biodegradability and ecotoxicity towards microorganisms and higher plants. This study aimed to supplement the toxicity data of AAILs and to nd "environmentally-friendly" AAILs for use in further studies. Preparation of AAILs Nineteen AAILs were synthesized in this work. Twelve AAILs were prepared using the "two-step" method, 37 The other seven AAILs were prepared using the "one-step" method, 31 Bacteria toxicity test The toxic effect of each AAIL on three bacteria (E. coli, B. subtilis, and R. solanacearum) was assessed using the well diffusion method, as described by Ventura et al. 38 Briey, three target bacteria were cultured in nutrient broth at 37 C for 12 h, and then 10 6 colony forming units (CFU) per cm 3 of the suspension was diluted with nutrient broth. A 1 mL bacterial inoculum was then uniformly spread onto an agar surface. Four holes with 5.0 mm diameters were punched into the agar using a sterile borer under aseptic conditions, and 50 mL AAILs solution at the desired concentration was placed into the four holes. The agar plates were incubated at 37 C for 24 h, aer which the diameters of the growth inhibition zones were measured. Microbial toxicity assays The toxic effect of AAILs on microorganisms was evaluated using the tube dilution method. The tested microorganisms included three bacteria (E. coli, B. subtilis, and R. solanacearum), four fungi (C. subvermispora, F. lignosus, P. chrysosporium, and T. sanguinea), and two yeasts (S. cerevisiae and S. stipitis). The bacteria were cultured in Mueller-Hinton broth medium at 37 C for 24 h, while the fungi and yeasts were cultured in Sabouraud agar medium at 28 C for 48 h. 
A suspension of 10 6 CFU cm À3 microorganisms was prepared from each culture. A series of AAILs solutions (7.8-1000 mmol L À1 ) were then prepared with Mueller-Hinton broth (bacteria) or Sabouraud broth (fungi and yeasts), and sterilized via ltration (0.45 mm pore-diameter membrane) under sterile conditions. AAILs solution (100 mL) and microorganism suspensions (100 mL) were introduced into individual wells of a 96-well plate. A well with culture but no microorganism was used as negative control, while another well with a microorganism but no AAILs solution was used as a positive control. Microorganism growth was visually determined aer incubation at 37 C for 24 h (bacteria) or at 28 C for 48 h (fungi and yeasts). The lowest concentration at which there was no visible growth (turbidity) was considered as the minimal inhibitory concentration (MIC). Samples of 20 mL from each well were spread onto an agar medium with inactivates (0.3% lecithin, 3% polysorbate-80, and 0.1% Lcysteine), then incubated at 37 C for 48 h. The lowest concentration of the IL that killed 99.9% or more of the test microorganism was considered as the minimum biocidal concentration (MBC). Phytotoxicity tests The toxic effect of AAILs on rice seed germination was assessed using seeding emergence and seeding growth tests (OECD/OCED 208/2006). Prior to germination, rice seeds were sterilized in 2% H 2 O 2 (v/v) for 10 min and then washed three times with distilled water. For each AAIL solution at each concentration (200, 400, 600, 800, and 1000 mg kg À1 ), seeds were soaked in 20 mL AAIL solution in the dark at 30 AE 1 C for 12 h. Next, 20 seeds were placed in a Petri dish (diameter 90 mm) on two pieces of lter paper moistened with 10 mL of the appropriate AAIL solution. One sample of rice seeds was treated with distilled water as a control, and all treatments were replicated three times. Rice seedlings were grown under controlled conditions at 28 AE 1 C with 80% humidity and a shi cycle of 14 h per day and 10 h per night. An equal amount of each AAIL solution was added to the appropriate plate every day to ensure sufficient moisture. Aer 10 days, the rice seeds were harvested, and their shoot height, root length, and fresh weight were measured, and the results were calculated by eqn (1)-(3). Shoot height, root length, and fresh weight of samples without treatment of AAILs were used as the control (CK). Higher concentration AAILs were not used to treat the seeds since the inhibition caused in some cases was strong enough to prevent any plant from growing. Biodegradation test The biodegradability of the AAILs was assessed according to guideline 301D of the Organization for Economic Cooperation and Development (OECD). Briey, a mineral medium was prepared consisting of 8.5 g L À1 KH 2 PO 4 , 21.75 g L À1 K 2 HPO 4 , 33.40 g L À1 Na 2 HPO 4 $2H 2 O, 0.5 g L À1 NH 4 Cl, 27.5 g L À1 CaCl 2 , 22.50 g L À1 MgSO 4 $7H 2 O, and 0.25 g L À1 FeCl 3 $6H 2 O. AAILs solution of 4 mg L À1 was prepared in the mineral medium and incubated with aerated water from Taozi Lake (Changsha, China). A control with inoculums but without AAIL was used as a blank, and sodium benzoate was used as a reference substance. These test solutions were placed in closed bottles and were kept in the darkness at 25 AE 1 C for 28 days. The biological oxygen demand (BOD) of the samples was determined using a dissolved oxygen meter (Firesting O 2 , Pyroscience Germany) every 7 days. 
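Since eqns (1)-(3) are referred to but not reproduced in the text above, the following sketch shows the conventional way such seedling-inhibition indices are computed relative to the untreated control (CK); the exact equations used in the study may differ, and the sample values below are hypothetical.

```python
import numpy as np

def inhibition_percent(control, treated):
    """Conventional inhibition index relative to the untreated control (CK):
    positive values indicate inhibition, negative values indicate growth
    promotion.  This is an assumed form, not the paper's eqns (1)-(3)."""
    control = np.asarray(control, dtype=float)
    treated = np.asarray(treated, dtype=float)
    return (control.mean() - treated.mean()) / control.mean() * 100.0

# Hypothetical root-length measurements (cm) for one AAIL concentration.
ck_root = [5.1, 4.8, 5.3]        # three replicate dishes, control
aail_root = [5.4, 5.6, 5.2]      # three replicate dishes, treated
print(round(inhibition_percent(ck_root, aail_root), 2), "% root inhibition")
# A negative value indicates growth promotion, as reported for some AAILs below.
```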
AAIL biodegradation rates were calculated by dividing the BOD by the theoretical chemical oxygen demand (TCOD). Statistical analysis The proton nuclear magnetic resonance ( 1 H NMR) data of AAIL samples were analyzed using MestReNova LITE 9.0.1, and Fourier transform infrared spectroscopy (FT-IR) data were analyzed by OriginLab 8.0 (OriginLab Corporation, American). Toxic and biodegradable data were analyzed using the graphics program GraphPad Prism v5.0b (GraphPad Soware Inc.). All data are presented as the mean of three independent experiments with error values expressed as the standard error of the mean (SEM). Antibacterial activity The well diffusion method is widely used to screen chemicals for toxicity due to its simplicity, where the toxicity of a chemical is determined by the diameter of the growth inhibition zones. 39 In this study, the effects of the AAILs were evaluated via well diffusion assays with target bacteria. Tables S1-3 † show the results of all tested AAILs against the bacteria B. subtilis, R. solanacearum, and E. coli. The growth inhibition halos (cm) for the highest AAIL concentration (1.00 mol L À1 ) tested against the three target bacteria are shown in Fig. 2. Most AAILs at the lowest concentration (0.0625 mol L À1 ) were found to be nontoxic (no inhibition zone) against the three target bacteria, while some AAILs demonstrated low toxicity towards the target bacteria even at the highest concentration (1.00 mol L À1 (Table S2 †). Moreover, the results showed that the AAILs' toxic effect on target bacteria is different, and the toxicity of AAILs is greatly related to the kinds of cation core, alkyl chain length and anions. 38 The effect of AAILs containing cations with alkyl imidazoliums (carbon atom ¼ 2, 4, or 6) or alkyl ammonia (carbon atom ¼ 2 or 4) against the three bacteria were also evaluated in this study (Fig. 2a). The diameters of the inhibition zones of B. subtilis exposed to [ [Cys] solution (0.025 mol L À1 ) were 0.00 and 0.93 AE 0.06 cm, respectively (Table S1 †). The toxic effect of these AAILs on other target bacteria was comparable to their effect on B. subtilis. This phenomenon is known as "chain length effect" (toxicity increases with an increase in alkyl chain length), 40 and occurs because cations with long alkyl chain are more hydrophobic and more easily pass through the membranes of microorganisms than short alkyl chains. 41 Thus, AAILs with longer alkyl chain are more destructive on microorganisms. This result shows that alkyl chain length plays an important role in AAILs' antibacterial activity. 38 As shown in Fig. 2b, 42 These results were in accord with that reported by Gouveia, they prepared 14 AAILs with different cations (imidazolium, pyridinium and cholinium) and anions (arginine, glutamine, glutamic acid and cystine) and studied their toxicity to bacteria and cells. 37 However, head group with cyclic structure is more destructive on target bacteria, and because cyclic structure is too stable to be damaged. In addition, ILs containing [N 4,4,4,4 ] + have a good antimicrobial activity as well. These results suggest that the head group makes a major contribution to AAIL toxicity. Fig. 2 The growth inhibition halo (cm) for the highest tested concentration of AAILs (1.00 mol L À1 ) against three target bacteria B. subtilis, R. solanacearum, and E. coli. The diameter of the inhibition zone of R. solanacearum exposed to [Cho][Asp] or [C 4 mim][Asp] solution was zero even at a high concentration (0.50 mol L À1 ) (Table S2 †). 
Moreover, the [Cho][Asp] concentration at 1.00 mol L À1 was also less toxic against B. subtilis and E. coli (Fig. 2d). However, the toxicities of other AAILs with the same cation and different anions were distinctly different because toxicity is also related to physicochemical properties of the anion, such as hydrophilicity or hydrophobicity. To the best of our knowledge, [C 4 mim][Asp] possesses one hydrophilic carboxyl groups, which can't easily pass through the membrane of microorganisms. 43 For this reason, the effect of [C 4 mim][Asp] on target bacteria was lower than that of other AAILs with a [C 4 mim] cation. In addition, the stability of anions is also an important factor affecting ILs toxicity, for example, the toxic effect of AAILs with phenylalaninate anions on target bacteria is stronger than that of other anions. 12 These results show that the anions species also play a role in the toxicity of AAILs. The diameters of the inhibition zones of R. solanacearum, E. coli, and B. subtilis treated with the highest concentration (1.00 mol L À1 ) of [Cho][Pro] solution were 0.80, 0.70, and 1.03 AE 0.06 cm, respectively (Fig. 2d). Gram-positive bacteria possess thicker and more hydrophobic cell walls, 44 and a much higher peptidoglycan content ($90%). The cell walls of Gram-negative bacteria are chemically more complex, which have an additional outer membrane mostly composed of lipopolysaccharides. 45 The latter is oen related to the higher resistance of Gram-negative bacteria to biocides. 46 The Gram-negative bacteria had higher tolerances than the Gram-positive bacteria, which can be attributed to the fact that Gram-negative bacteria possess a second cell membrane which acts as an additional barrier. These results agree well with the ndings reported by Mester et al. 43 Antimicrobial activity Although the well diffusion method mentioned above can easily verify if the AAILs have toxicity to microorganisms, it can't quanticationally assess the toxicity. 39 Therefore, antimicrobial activity of the nineteen AAILs were also estimated using the tube dilution method, this time with nine microorganisms: three bacteria (E. coli, B. subtilis, and R. solanacearum), four fungi (C. subvermispora, F. lignosus, P. chrysosporium, and T. sanguinea), and two yeasts (S. cerevisiae and S. stipitis). The tube dilution test is usually used to determine the minimum inhibitory concentration (MIC) and the minimum bactericidal/fungicidal concentration (MBC/MFC), which are two essential parameters in antimicrobial susceptibility testing. The MIC and MBC values (in mM) for the AAILs are shown in Table S4. † According to the reports by Gathergood, the toxicity of ILs was divided into three levels, and which was indicated as three colors, green, amber, and red. The green color represents the MIC/MBC value > 2 mM, the amber color represents the MIC/MBC value between 0.25 and 2.0 mM, and the red color represents the MIC/MBC value < 0.25 mM. 47 Fortunately, all the tested AAILs in this study had the MIC/MBC value > 2 mM, which indicated AAILs were much safer from this aspect. Even so, the AAILs with different structures had different inhibiting effect to the tested microorganisms. It was found that the AAILs that incorporated the choline cation showed lower toxicity than other tested compounds towards target microorganisms. ). This behavior is described as a "cut off effect" where there is a maximal effect above which an increase in alkyl chain length does not produce an increase in toxicity. 
40,48 Differences in the antimicrobial activities of AAILs containing an L-cystine anion might be attributable to differences in the head group. For instance, the presence of a heterocyclic imidazole group in [C 4 were 125, 500, 250 mmol L À1 , respectively. The target B. subtilis is a Gram-positive bacterium, which has a thick and multilayered cell wall made of peptidoglycan surrounding the cytoplasmic membrane, while Gram-negative bacteria have a cell wall made up of a cytoplasmic membrane, peptidoglycan layer, and an external cytoplasmic membrane. 50 The presence of a more structurally complex cell wall gave the Gram-negative bacteria a greater tolerance to AAILs than the Gram-positive bacteria. Similarly, the MBC values of [C 4 mim][Cys] when targeting the two yeasts were lower than those when targeting bacteria or fungi, indicating that yeasts were the microorganism most sensitive type to AAIL toxicity. While yeasts also possess a protective capsule, this capsule must be easily destroyed by AAILs. 51 Phytotoxicity tests The toxicity of the investigated AAILs at various concentrations against rice seed germination is shown in Tables S5-7. † The results of these tests showed that AAILs at low concentrations possessed a low toxic effect on seed germination. For instance, root inhibition aer exposure to 200 mg kg À1 [C 4 mim][Phe] was the lowest (À3.91%) among all tested concentrations. As the AAIL concentrations increased, the adverse effect of AAILs on seed growth signicantly increased. However, some AAILs promoted the growth of rice seeds under both low and high test concentrations. For example, the root length of rice treated with 200 mg kg À1 of [Cho][Asp] for 10 days was eight times greater than that of rice without treatment. This pattern held true even aer exposure of rice seeds to the highest concentration of AAIL (1000 mg kg À1 ). And thus, [Cho][Asp] can be expected to be used as the growth accelerator for rice. The mechanism of these results is thought to be due to the attractive force between the cations and anions that make up the ILs. This attraction is stronger at high concentrations and weakens at low concentrations. 52 In this case, the [Cho][AA] was ionized to cholinium cations and amino acid anions, which can promote the plant growth. Similar results have been reported for wheat seedlings, 53 exposure to the low concentrations of the IL 1-butyl-3methylimidazolium tetrauoroborate promoted seedling growth. The difference in the effects of [Cho][AA] on seed germination might be attributed to the structures of anions, as can be seen in the data presented in Table S5. † For instance, [Cho][Asp] had a positive impact on the growth of rice seeds at all tested concentrations, which suggests that maybe the introduction of a carbonyl acid group signicantly reduced the toxic effect of AAILs on rice. In previous reports cations have been shown to play an important role in the toxicity of ILs. [54][55][56] Therefore, in this study, AAILs composed of various cations combined with an L-cysteine anion were examined to determine their toxicity in rice. It can be seen in mim] + z [N n,n,n,n ] + . This order is also identical with the result in antimicrobial activity mentioned above and other reports. 12 In contrast, the observed effect of the AAILs on root length, shoot length, and fresh weight of rice showed that the introduction of an alkyl side chain in cations affected AAIL toxicity. 
Specically, shoot inhibitions of [C 2 mim][Cys], [C 4 mim][Cys] , and [C 6 mim][Cys] were À4.96%, À22.71%, and 97.47%, respectively (Table S6 †). As the data show, the toxic effect of AAILs on rice increased with the elongation of the alkyl chain, which may be attributed to the increase of lipophilicity as the alkyl chain increased. 14 Compounds with more pronounced lipophilic characters can interact more easily with the hydrophobic domains of membrane proteins and with the phospholipid bilayers that make up the cell membrane. This interaction may disrupt physiological function in the cell membrane and increase AAIL toxicity. 57 Similarly, [N 4,4,4,4 ][Cys] showed greater toxicity than [N 2,2,2,2 ][Cys] (Table S6 †). This phenomenon has also been reported in an assessment of IL phytotoxicity toward rice seedlings using imidazolium chloride. 58 The effect of anions on ILs toxicity can't be neglected either. It can be seen in Tables . These results also approved that the anions can also affect the toxicity of AAILs. This behavior contradicts previous studies which reported that anions had only a small impact in reducing IL toxicity. The difference might be attributable to the introduction of choline. 59 Biodegradability In contrast to chemical degradation which requires the assistance of an oxidant for catalysis, biodegradation is the breakdown of chemical compounds by microbes. 60 In general, the biodegradation test methods are considered to be valid if the reference compound of >60% is biodegraded within 14 days. In this study, the biodegradability of sodium benzoate was measured as 61.2% in 14 days, which proved the validity of this test method. As can be seen in Fig. 3 , which allows them to be classied as "readily biodegradable" according to OECD standards. According to the reports by Gathergood, the degradation rate was divided into three levels, degradation rate > 60% represented by green color, degradation rate between 0-59% represented by amber color, and degradation rate < 20% represented by red color. 47 [Cho] [AA] was prepared from renewable biomaterials, which are unstable in the environment and are easily decomposed by microbes. 36 Moreover, [Cho][AA] is an attractive compound for use in industry due to its low microbial toxicity, as described in section Antibacterial activity and Antimicrobial activity. Compared with the results reported by Hou, 36 the biodegradability of AAILs in this study was relatively lower, the reason can be that the inoculum was lake water and the OECD 301D method was chosen as the test method according to guideline 301D of the Organization for Economic Cooperation and Development (OECD (Fig. 3), which could be attributed to the presence of the stable hetero cyclic imidazole group. 14 Similarly, Gathergood et al. found that imidazolium ILs were classied as red (poor biodegradability) using the CO 2 headspace test (ISO 14593), and they suggested that imidazolium salts containing a heterocyclic ring were difficult for microorganisms to degrade. 60 The relatively low biodegradability of [Pyr] [Cys] and [Pip][Cys] might therefore be attributed to the presence of a N-containing heterocyclic group, which may be more resistant to microbe attack. Additionally, the microbial toxicity of AAILs containing quaternary ammonium were high, as described in section Antimicrobial activity, which should decrease inoculum density in closed bottle, leading to the poor biodegradation of [N 2,2,2,2 ][Cys] (17.7 AE 2.9%). 
The biodegradability of AAILs improved slightly as the side chain length of the cation increased. For instance, [N4,4,4,4][Cys] possesses a long alkyl chain and showed higher aerobic biodegradability.37 It can also be concluded from Fig. 3 that the biodegradability of AAILs with the cysteine anion followed the order: [Cho]+ > [Pyr]+ > [Pip]+ > [N4,4,4,4]+ ≈ [C4mim]+ > [N2,2,2,2]+ > [C6mim]+ > [C2mim]+. This order is contrary to the toxicity order, indicating that the greater the toxicity, the lower the biodegradability. The results also confirmed that the cations of AAILs can affect the biodegradability. As to the amino acid anions, they can be divided into aliphatic and cyclic amino acids. Conclusions In this study, the antimicrobial activity, phytotoxicity, and biodegradability of 19 AAILs with different cations and anions were evaluated. The main factors affecting the toxicity and biodegradability of AAILs, including the type of cation (head group and side chain) and anion (hydrophobicity, hydrophilicity, and stability), were examined. The results showed that AAIL toxicity mainly depended on the structures of the cations and anions. Toxicity increased with the elongation of the alkyl chain and decreased with the introduction of hydrophilic groups. The presence of a long alkyl chain in the cation and of hydrophilic groups in the anion can improve the biodegradability of AAILs. Although the introduction of amino acids as anions can reduce the toxicity and improve the biodegradation of AAILs, the (eco)toxicity and (bio)degradability of some AAILs cannot be ignored either. The results showed that not all AAILs had the expected low microbial toxicity, and that some AAILs were highly toxic toward the target microorganisms. Moreover, the phytotoxic effect of AAILs on Oryza sativa L. further demonstrated that the effect of AAILs on plants should not be ignored. Some AAILs, such as [Cho][Asp], could potentially be used as growth accelerators. The biodegradability tests showed that the majority of the AAILs had unsatisfactory biodegradability (≤60%). Therefore, it can be concluded that, since not all AAILs have low toxicity and good biodegradability, it is necessary to assess the effect of AAILs on the environment and human health before their use in mass-market applications. Conflicts of interest The authors declare no conflicts of interest.
Thermalisation and hard X-ray bremsstrahlung efficiency of self-interacting solar flare fast electrons Most theoretical descriptions of the production of solar flare bremsstrahlung radiation assume the collision of dilute accelerated particles with a cold, dense target plasma, neglecting interactions of the fast particles with each other. This is inadequate for situations where collisions with this background plasma are not completely dominant, as may be the case in, for example, low-density coronal sources. We aim to formulate a model of a self-interacting, entirely fast electron population in the absence of a dense background plasma, to investigate its implications for observed bremsstrahlung spectra and the flare energy budget. We derive approximate expressions for the time-dependent distribution function of the fast electrons using a Fokker-Planck approach. We use these expressions to generate synthetic bremsstrahlung X-ray spectra as would be seen from a corresponding coronal source. We find that our model qualitatively reproduces the observed behaviour of some flares. As the flare progresses, the model's initial power-law spectrum is joined by a lower energy, thermal component. The power-law component diminishes, and the growing thermal component proceeds to dominate the total emission over timescales consistent with flare observations. The power-law exhibits progressive spectral hardening, as is seen in some flare coronal sources. We also find that our model requires a factor of 7 - 10 fewer accelerated electrons than the cold, thick target model to generate an equivalent hard X-ray flux. This model forms the basis of a treatment of self-interactions among flare fast electrons, a process which affords a more efficient means to produce bremsstrahlung photons and so may reduce the efficiency requirements placed on the particle acceleration mechanism. It also provides a useful description of the thermalisation of fast electrons in coronal sources. Introduction Most studies of the behaviour of fast electron populations in solar flares have assumed the presence of an ambient cold background plasma of sufficient density that it dominates the evolution of the fast electrons. In this case, self-interactions between the members of the fast electron population may be neglected, and only interactions between the fast electrons and members of the background plasma are considered. Analytic treatments of this kind include Brown's (1971) original formulation of the cold, thick target problem, and other analytic approaches such as those of Vilmer et al. (1986), which considers the spatial and temporal evolution of fast electrons in a region of inhomogeneous magnetic field and plasma density, and Conway et al. (1998), which gives analytic expressions for the moments of the electron distribution function. Galloway et al. (2005) detailed a treatment which also assumed a dominant background plasma, but which relaxed the traditional cold target assumption. However, they also revealed a situation where observations implied that the fast electron population might not be insignificant in comparison to the ambient plasma, i.e. the local density of the fast electrons is not small compared to the density of the background plasma electrons. Krucker et al.
(2009) have reported a coronal hard X-ray (HXR) source in which the fast particle density approaches the plasma density. In this case, it would be desirable to consider also the self-interactions between the members of the fast electron population. As well as being an interesting modelling problem, such an approach will have some bearing on a central problem in flare physics. The existing cold, thick target flare paradigm envisages the flare hard X-ray yield being generated principally by interactions between fast electrons and an essentially stationary (i.e. low thermal speed) background. Such encounters between particles of substantially different speed are highly inefficient in generating bremsstrahlung radiation: the fast electrons lose around 10⁴-10⁵ times more energy through long-range Coulomb collisions than through bremsstrahlung emission of X-rays by short-range interactions (e.g. Brown 1971). Comparison of the energy emitted in the form of X-rays with the total flare energy release reveals that, under the cold, thick target approximation, a substantial fraction of the total flare energy (possibly as much as 50%) is manifested in accelerated electrons (e.g. Brown 1971; Lin & Hudson 1976; Saint-Hilaire & Benz 2005). This has immediate, important implications for the particle acceleration mechanism: as many as 10³⁶ electrons must be accelerated to energies greater than 20 keV each second (Hoyng et al. 1976). If the acceleration mechanism is some form of magnetic reconnection in a single site, the acceleration region is limited to very restricted spatial scales, with current sheets only a few kilometres wide or thinner. The very great flux of accelerated electrons then constitutes a number and density problem: accelerating such a large number of electrons in sub-second timescales in such a small region requires a highly effective and efficient accelerating mechanism, which at this time remains theoretically elusive. One proposed solution is that the acceleration does not occur in a single or a few sites, but is distributed over a large number of small reconnection sites throughout the flaring volume (e.g. Turkmani et al. 2005). This somewhat relaxes the constraint of very small acceleration region volume, and so lowers the implied density of fast particles in the accelerating region. However, the overall number problem remains formidable - essentially, the acceleration mechanism is required to accelerate almost all the electrons present in the corona above an active region to energies above 20 keV every second for the duration of the impulsive phase of the flare. An alternative solution would be to reduce the total quantity of energetic electrons required to produce the observed bremsstrahlung X-ray yield. This might be done by re-cycling electrons repeatedly through the accelerator, so that each electron could produce many high energy photons (e.g. Brown et al. 2009). Alternatively, fewer electrons would be required if the bremsstrahlung emission process was more efficient than that envisaged in the cold target scenario. Since the fast electron population is assumed to be 'dilute' in the cold target case, the fast electrons only undergo electron-electron collisions with target plasma electrons of much lower energy. Thus, the fast electrons rapidly lose energy to the target, preventing them from generating further high energy photons through bremsstrahlung emission during interactions with the ions.
However, if the high energy particles collide with each other, the collisions merely exchange energy between the fast particles, giving less systematic collisional energy loss from the energetic electron population. The fast electrons can therefore continue to generate high energy photons. Consequently, a population of fast particles thermalising through self-interactions would constitute a more efficient bremsstrahlung source, allowing the observed flare hard X-ray output to be reconciled with a smaller accelerated electron population. This in turn relaxes the stringent requirements on the effectiveness of the acceleration mechanism. In view of this, we now examine an approach to obtaining an approximate analytic solution for the time evolution of a population of self-interacting fast particles. To render the problem more tractable, and to afford a completely analytic solution, we consider a situation with no background plasma electrons and only Coulomb collision interactions exchanging energy between the fast electrons, and with electron-ion encounters giving rise to bremsstrahlung emission. In doing so, we consider a situation more closely analogous to a coronal or loop-top source than a footpoint source. Masuda et al. (1994) first identified a distinct flare hard X-ray source at high altitude in the more tenuous coronal plasma, in addition to the usual footpoint sources located in the denser chromospheric plasma. A study by Petrosian et al. (2002) of loop-top emission in limb flares suggested that coronal HXR sources may be a common feature in all flares, and such sources have recently been reviewed by Krucker et al. (2008). In neglecting the dense chromospheric background plasma, our model is not intended to describe the fast electron behaviour at the footpoint sources, nor the sort of high ambient density coronal source described by Veronig & Brown (2004). However, it can be considered to approximately describe the evolution of a fast electron population in a low-density coronal source, where interactions with the background plasma will be less significant since the background density is several orders of magnitude lower than in the chromosphere, but the intensity of the fast electron beam remains unchanged. Recently, Krucker et al. (2009) have identified in RHESSI data a coronal HXR source that appears to have this character. In Section 2 we outline the derivation of the time-dependent electron distribution function we obtain for this problem. In Section 3 we give some numerical illustrations of the behaviour of this solution, and in Section 4 we compare it to some observations of corresponding solar events. In Section 5 we compare the efficiency of our model to that of the traditional cold, thick target model for the production of hard X-ray photons. Our conclusions are summarised in Section 6. Development of the distribution function for thermalising electrons We consider a plasma with an initial power-law distribution of electron energies and cold (effectively stationary) ions, and study the evolution of the electrons towards a Maxwellian due to Coulomb collisions. The plasma is spatially homogeneous and thermally insulated (i.e. experiences no particle or conductive energy losses), and we neglect energy loss through radiation (justified below). For simplicity we consider only the electrons since the ions will remain cold throughout, and electron-ion collisions affect mainly the pitch angle of the electrons without substantially altering their energy (e.g. Trubnikov 1965).
We assume that the distribution is initially isotropic, so that it remains isotropic for all time, and that almost all electrons are nonrelativistic. We will not attempt to construct an exact, self-consistent solution. Instead, we seek an approximate distribution function which is analytically accessible but still behaves correctly in the high and low energy regimes. The evolution of the parameters of the distribution and the relative magnitudes of its components will be constrained by the relevant conservation laws. More precisely, we will aim to describe the distribution f(v, t) produced at time t as a population of electrons evolves under binary collisions, given an initial distribution specified by Eq. (1). The distribution function f(v, t) satisfies the normalisation condition, Eq. (2), for all times t. Here n_tot is the total electron number density. Equation (1) ensures that this normalisation is satisfied at t = 0. Because of its isotropy, the value of f depends only on speed v. The initial condition (1) resembles the kappa distributions found commonly in space plasmas (e.g. Maksimovic et al. 1997) in that it includes a 'core' with characteristic speed v_0 playing the role of a thermal speed, combined with a power-law tail at high energies. The kappa distribution in turn has been found to be consistent with the electron distributions present in coronal HXR sources (Kašparová & Karlický 2009). Its precise form allows us to carry out the subsequent discussion mostly analytically. As already noted for the kappa distribution by Kašparová & Karlický (2009), it possesses a maximum when rewritten as a distribution per unit energy and thus needs no independent invocation of a "low-energy cutoff". The power-law index p we require to be greater than 5/3 for finite total energy. v_0 acts as a lower 'turnover', ensuring the distribution remains well behaved at v = 0 (cf. Brown & Emslie 1988). We employ the Fokker-Planck formalism for particle evolution under binary collisions (e.g. Rosenbluth et al. 1957; Montgomery & Tidman 1964; Helander & Sigmar 2002). The kinetic equation is Eq. (3), in which f is the electron velocity distribution function. The right hand side of Eq. (3) is the Fokker-Planck representation of the effect of electron-electron collisions. The coefficients for drift (φ′) and diffusion (ψ′′) may be expressed in terms of the Rosenbluth potentials φ and ψ (Rosenbluth et al. 1957). For suprathermal particles (i.e. v ≫ v′), we make the approximation that |v − v′| ≈ v and take it out of the integral. For any monotonically decreasing f (such as the power-laws we consider), this approximation will always become adequate for sufficiently large v: the integrand of φ does not diverge as v′ → v because the surrounding volume element simultaneously tends to zero more rapidly than the term in the denominator (see also e.g. Spitzer 1956; Trubnikov 1965). Using Eq. (2), the Rosenbluth potentials then reduce to much simpler forms. Since here the second derivative of ψ(v) is zero, the suprathermal particles experience no velocity diffusion (second term in Eq. 3) and undergo only systematic velocity change (first term in Eq. 3). The kinetic equation now takes the form of Eq. (4), where τ is a collision time; we have introduced the dimensionless time s = t/τ and speed u = v/v_0. We have also made f non-dimensional, denoting it f̃, and added the subscript ∞ to emphasise that this solution holds strictly in the high-velocity limit. The characteristic time τ could be evaluated for any speed; the appearance of v_0 in the initial condition (1) makes this the natural choice.
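Since the explicit equations are not reproduced in this extract, the following is a minimal sketch, not the paper's own Eq. (4), of how such a drag-only (suprathermal) limit behaves if the dimensionless friction is assumed to take the familiar cold-target form du/ds = −1/u²; the numerical coefficient depends on exactly how τ is defined and is an assumption here.

```latex
% Sketch only: drag-dominated (diffusion-free) limit with an assumed friction law.
\[
  \frac{\partial \tilde f_{\infty}}{\partial s}
  \;=\; -\frac{1}{u^{2}}\frac{\partial}{\partial u}\!\left(u^{2}\,\dot{u}\,\tilde f_{\infty}\right)
  \;=\; \frac{1}{u^{2}}\,\frac{\partial \tilde f_{\infty}}{\partial u},
  \qquad \dot{u} \equiv \frac{du}{ds} = -\frac{1}{u^{2}} .
\]
\[
  \frac{d}{ds}\!\left(u^{3} + 3s\right) = 3u^{2}\dot{u} + 3 = 0
  \;\;\Longrightarrow\;\;
  \tilde f_{\infty}(u, s) = \tilde f_{\infty}\!\big((u^{3} + 3s)^{1/3},\, 0\big).
\]
```

Under this assumption the high-energy solution is constant along characteristics, i.e. it depends on u and s only through the single combination u³ + 3s.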
It is easy to show that any function solely of a particular combination of u and s is a solution of Eq. (4). Thus, with initial condition (1), the distribution at high energies at some time s > 0 will be f̃_∞ as given by Eq. (6). Employing this solution for f(v) in the full form of the Rosenbluth potentials does not result in an identically zero ψ′′ as we have assumed. However, for realistic p values, ψ′′(v) is proportional to a large negative power of v, and so will be small in the high-v regime of interest to us here. We therefore consider this solution to be adequate for our approximate treatment. The distribution f̃_∞ given by Eq. (6) is also the one that would develop from our initial condition in the limit of zero ambient temperature if there was background plasma present in the system. For X-ray bremsstrahlung purposes, fast electron evolution is normally calculated in exactly this limit (e.g. Brown 1971; Melrose & Brown 1976; see also Takakura & Kai 1966). f̃ will be very close to f̃_∞ for large u, but the differences will become more and more significant as u becomes comparable to the mean speed of the whole distribution. Ideally we would determine the exact form of f̃ from Eq. (3), but we know nonetheless that collisions drive the whole distribution towards an isothermal, Maxwell-Boltzmann form and that f̃ will attain this form more and more closely as s → ∞. f̃_∞ conserves neither electron energy, since all electrons lose energy monotonically, nor number, since it implies an unphysical, non-zero flux of electrons out of the system at u = 0. In view of all this we should capture most of the essential physics of f̃ by adding a Maxwell-Boltzmann component to f̃_∞, writing the combined distribution as Eq. (7). Here ñ_M(s) and T̃(s) are the density and temperature of the Maxwellian component at time s, normalised to n_tot and to m v_0²/(2k) respectively. Clearly ñ_M(0) = 0. ñ_M(s) and T̃(s) will be determined for s > 0 by appealing to conservation of electron number and energy, as follows. We rewrite Eq. (2) in dimensionless units as Eq. (8). Substituting Eq. (7) in Eq. (8) immediately implies ñ_M(s), given by Eq. (9). Equation (9) describes the evolution over time of the density of the thermal component of the overall distribution: as electrons are removed from the high-energy component they are added to the Maxwellian component, increasing its density, such that the total density of the system is conserved. For this thermally insulated system, in which radiation and electron-ion equilibration are neglected, the total energy density E is also conserved and is given by Eq. (10). The last equality in Eq. (10) follows because E is constant, so it may be evaluated at s = 0 using the initial condition (1). Using (7) we can write the total energy in terms of T̃(s), as in Eq. (11). Substituting (9) in (11) and rearranging, we find T̃(s) as given by Eq. (12). For early times, i.e. s ≪ 1, Eq. (12) reduces to the limiting form of Eq. (13), so for p > 5/3 the temperature is well behaved at t = 0, justifying our earlier disregard of its initial behaviour. Note that the initial temperature of the Maxwellian component is determined completely by the parameters of the initial power-law, namely its energy and number densities and power-law index. We are not free to specify T̃(0) independently. Electrons join the Maxwellian component only as they slow down to u = 0. Energy is being lost from all electrons included in f̃_∞, however, so T̃ needs to take a finite value as soon as s > 0, for energy to be conserved. For late times, i.e. s ≫ 1, Eq. (12) reduces to the limiting form of Eq. (14), and all the energy resides in the Maxwellian part of the distribution, i.e. the plasma has completely thermalised.
From (12), (13) and (14) we see that the temperature of the Maxwellian component will increase monotonically as the plasma evolves, but only by a factor of order unity for any plausible value of p. Thus, the overall, evolving plasma distribution is described by Eq. (7), with ñ_M(s) given by Eq. (9) and T̃(s) given by Eq. (12). The initial distribution is a power-law with a lower turn-over. This thermalises, passing through intermediate stages with a modified power-law and growing thermal component. Eventually the power-law component diminishes completely, and the plasma is described by a purely thermal distribution. Although not a complete description, this approximate treatment will be correct at high velocities and late times and clearly includes, at least qualitatively, the essential physics of the situation. Numerical illustrations; bremsstrahlung radiation In this section we show numerical examples of the complete distribution function f̃(u, s) and the resulting bremsstrahlung photon spectra, check numerically that entropy increases monotonically with time and address the neglect of radiation losses. Distribution time evolution The time evolution of the combined, overall distribution is shown in Fig. 1. As may be seen, the distribution begins as a power-law, flattening at low velocities. As time advances and the electrons in the power-law begin to thermalise, the distribution at lower energies takes on Maxwellian form, and the power-law tail diminishes. At late times, the Maxwellian is dominant and only a small power-law population remains, with a smooth intermediate transition to this regime from the initial condition. Entropy For Eq. (7) to be a valid solution, we have to check that the entropy increases with time. The entropy of the distribution is computed from f̃ in the standard way. Figure 2 shows a plot of the entropy of the electron population as a function of time for our example initial population with a power-law index of 2. As may be seen, the entropy does increase monotonically with time. Similar investigations of the time-dependence of the entropy for power-laws ranging from just greater than 5/3 (the boundary of validity of the solution) to 10 (steep) show that the entropy increase is smooth and monotonic over a range of appropriate power-law indices. Thus the solution does satisfy the increasing entropy criterion. Photon spectrum The initial impetus for this work came from considerations of X-ray production efficiency. Is the temporal evolution of hard X-ray emission in this scenario consistent with observations? We consider now the bremsstrahlung spectrum which would be emitted by our evolving electron population. Let ǫ denote photon energy, with ǫ̃ its dimensionless counterpart. Following the usual (thin target) formalism (e.g. Brown 1971), we find that the rate of photon emission per second per unit photon energy per unit volume of the source region is given by Eq. (17), where dσ/dǫ is the cross-section differential in photon energy (e.g. Koch & Motz 1959). Writing the cross-section in terms of the classical electron radius r_e, we obtain the non-dimensional form of the emissivity, with the dimensionless combination J̃ = r_e² v_0 τ n_tot and j̃ = τ n_tot j. Numerically, J̃ = 6.1 × 10⁻⁸ E_0², where E_0 is the energy in keV of an electron of speed v_0. Figure 3 shows a plot of bremsstrahlung spectra from our electron population with power-law index p = 2 (corresponding to a power-law in electron energy with a spectral index of δ = 2.5), plotted for photon energies 1 < ǫ̃ < 100, and for various values of s.
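To make the thin-target construction described above concrete, the short sketch below numerically integrates an assumed isotropic electron distribution against a Kramers-like cross-section, dσ/dǫ ∝ 1/(ǫE); it is a schematic illustration in arbitrary units rather than an evaluation of the paper's Eq. (17), and the core-plus-tail distribution used is a placeholder, not the study's exact initial condition.

```python
import numpy as np

def thin_target_spectrum(f_of_u, eps_grid, u_max=50.0, n_u=4000):
    """Schematic thin-target photon spectrum (arbitrary units).

    f_of_u   : isotropic electron distribution as a function of dimensionless speed u
    eps_grid : photon energies in units of the electron energy scale m v0^2 / 2
    Cross-section: Kramers-like, sigma(eps, u) ~ 1 / (eps * u**2), for u**2 >= eps.
    """
    u = np.linspace(1e-3, u_max, n_u)
    spectrum = []
    for eps in eps_grid:
        emitting = u[u**2 >= eps]   # an electron can only emit photons up to its own kinetic energy
        integrand = emitting * f_of_u(emitting) * (1.0 / (eps * emitting**2)) * emitting**2
        spectrum.append(np.trapz(integrand, emitting))
    return np.array(spectrum)

# Placeholder core-plus-tail shape (flat core, power-law tail); p = 2 is illustrative only.
p = 2.0
f = lambda u: (1.0 + u**2) ** (-p)

eps = np.logspace(0, 2, 30)          # photon energies 1-100 in units of m v0^2 / 2
dj = thin_target_spectrum(f, eps)
print(np.round(dj[:5], 6))
```

The u² ≥ ǫ̃ mask simply enforces the kinematic requirement that only electrons with kinetic energy at or above the photon energy contribute to the emission at that energy.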
We used the non-relativistic Bethe-Heitler cross-section (Koch & Motz 1959, expression 3BN(a)), for which σ_0(ǫ̃, u) carries the standard (16/3)(1/137) prefactor. This cross-section yields at least the shape of the spectrum adequately in the 10s of keV regime. The photon spectra reflect the distribution function behaviour shown in Fig. 1, displaying a smooth temporal and spectral transition from the initial condition, with a straight, power-law spectrum, to late times, where the Maxwellian dominates the spectrum and the power-law component is minimal. The power-law component hardens progressively with time. Neglect of radiation losses With Eq. (17) we may also check the neglect of radiation losses in this discussion. In dimensionless units, the total energy loss rate R(s) to bremsstrahlung is given by Eq. (19), where the last equality follows after substituting (17) and changing the order of integration. With cross-section (18) we may use the integral in Eq. (20). The integral in (20) gives the mean speed. It can be evaluated explicitly, but it is enough to note that it varies only by a factor of order unity and is always O(1) (just as T̃ is, as shown in Eqs. (12)-(14)). The total dimensionless electron energy content (10) is also O(1). If we take E_0 = 10 keV, then J̃ = 5×10⁻⁶, and we can see immediately, without any detailed time integration, that radiation losses will only become cumulatively significant on times s of order 10⁵. For low enough final temperatures, atomic spectral lines will contribute substantially to radiation. If the final temperature is in the range 10⁶-10⁷ K, the total radiation loss could be up to an order of magnitude greater, depending on source chemical abundances (Sutherland & Dopita 1993). Even in this case radiation losses may be neglected at least up to s = 10⁴ (independent of density, because s is expressed in collision times). Clearly a significant increase in HXR production efficiency is possible. In the next section we look more closely at whether the associated photon spectrum can reproduce observations. Comparison with observations In the previous section we see the characteristic spectral behaviour of the kind of source we model here: high-energy spectral hardening accompanied by the emergence of an isothermal spectrum at low photon energies. [Fig. 4: Reproduction of Fig. 3 from Lin et al. (1981). Hard X-ray spectra from a range of time intervals during the flare of 27 June 1980. The absolute vertical scale corresponds to the uppermost spectrum in each panel. Each successive spectrum has been displaced downwards by two decades to enhance its visibility. However, the relative scalings of the spectra have been preserved. The solid lines are fits by Lin et al., and aid visual interpretation of the spectra. (Reproduced by permission of the AAS.)] High-energy hardening ("soft-hard-harder" spectral behaviour) has been observed in a significant minority of flares (Frost & Dennis 1971; Cliver et al. 1986; Bai & Sturrock 1989; Hudson & Fárník 2002). Kiplinger (1995) found that around 15% of all flares display a soft-hard-harder spectral progression. Observations also exist of flares in which incoherent gyrosynchrotron emission (Melnikov & Magun 1998; MacKinnon 2006) is consistent with high energy electrons undergoing progressive hardening, but X-ray detectors lacked the required sensitivity at the relevant high energies to provide corresponding hard X-ray data for these events. This hardening is interpreted in terms of trapping of fast electrons in a low density, coronal region (e.g. Takakura & Kai 1966; Bai & Ramaty 1979).
It occurs in our model for exactly the same reason and on its own would give no decisive test. The simultaneous emergence of the late phase, high temperature component (Fig. 3) would be much more suggestive, however. Lin et al. (1981) discovered a 'super-hot' (∼ 34 MK) thermal component of the hard X-ray spectrum emerging late in the impulsive phase of the 27 June 1980 flare. These observations were made with balloon-borne germanium detectors with 1 keV FWHM spectral response, which may be considered as precursors of the RHESSI detectors. The signature of such a super-hot thermal source had been seen before as a slowly-decaying emission component at ∼ 20-25 keV in scintillation counter detectors, but their spectral resolution was too low to confirm its identity as a thermal source. No imaging information was available for this event, but such sources now appear to be a sub-class of coronal hard X-ray source (Krucker et al. 2008). Figure 4 shows a summary of the observed X-ray spectra from this event for a set of 15 time intervals over the duration of the flare. These intervals are marked on Fig. 5, which shows the X-ray lightcurves for the event in a number of energy channels, as recorded by a scintillation counter also flown on the balloon. As may be seen, the spectra initially have a power-law form, appearing as a straight line in the log-log plots from 10 to 100 keV. As the event progresses, a departure from the straight power-law form may be seen, beginning at the lowest energies at interval 5. This departure grows increasingly prominent in later intervals, taking on a form which is well fitted by a Maxwellian component (Lin et al. 1981), and by interval 10 it is clearly the dominant spectral feature. Overall these observations show similar behaviour to the predicted bremsstrahlung spectra from our model, as shown in Fig. 3, with an initially power-law spectrum giving way to a growing Maxwellian component at low, but increasing, photon energies as the flare progresses. Given the approximate character of our treatment, we do not attempt detailed spectral fitting; but the qualitatively similar behaviour is clear. In the model spectra, the mean intensity of the power-law, non-thermal component decreases with time. In addition, variation in its gradient (spectral hardness) may be seen. Taken over the whole spectrum, a broad characterisation of spectral hardness (as would be measured by low spectral resolution, pre-RHESSI instruments) would suggest that the spectrum softens, since the intensity at low energies is approximately constant but the intensity at high energies decreases. However, the portion of the spectrum which remains power-law in character (i.e. above ǫ̃ ≈ 20) actually hardens, since lower energy electrons thermalise more readily. In a detector with sufficiently high spectral resolution, the power-law type portion of the spectrum could be separately identified and it would be seen to progressively harden. Such behaviour appears consistent with the recent RHESSI study of Shao & Huang (2009), for instance. The intensity of the non-thermal component for the 27 June 1980 flare also decreases with time. However, the behaviour of its spectral hardness is less clear. As may be seen from Fig. 5, the overall flare event featured a number of hard X-ray bursts. (We may consider each burst as a separate instance of the electron thermalisation process we explore in this paper.) Lin & Schwartz (1987) conducted a detailed study of the power-law spectra from these bursts.
They found that many displayed the more common soft-hard-soft spectral progression (e.g. Hudson & Fárník 2002; Grigis & Benz 2004), but some showed a progressive hardening. A "super-hot" component is not seen so cleanly in many flare spectra. Krucker et al. (2008), however, identify such phenomena with coronal hard X-ray sources, consistent with the picture we develop here. Alexander & Metcalf (1997) conducted a detailed study of the Masuda et al. (1994) event as observed by the HXT instrument on the Yohkoh spacecraft (Kosugi et al. 1991). They characterise this coronal event as also showing the emergence of a high temperature (∼ 40 MK) thermal component from an initially non-thermal source. Figure 6 shows an example of four RHESSI spectra of a coronal source from various intervals over the duration of the occulted-footpoint flare of 4 April 2002 (Jiang et al. 2006). The two spectra in the upper panel of Fig. 6 show the initial power-law nature of the emission in the rise, or preheating, and impulsive phases of the flare. The lower panel shows the spectrum from approximately one minute after the impulsive phase observation, exhibiting a clear thermal component joining the hardening power-law. The lower panel also shows a spectrum from approximately two minutes later, in which the emission appears entirely thermal, with any remaining power-law component being lost in noise. The power-law spectral index undergoes progressive hardening over the duration of the first three intervals shown in Fig. 6, before showing an apparent abrupt softening between the third and fourth intervals (Jiang et al. 2006, Fig. 11). The temperature of the fitted thermal component rises between the second and third intervals, reaching a peak of approximately 25 MK (Jiang et al. 2006, Fig. 11), before beginning to cool over the remainder of the flare (by a cooling process which we do not attempt to model). Thus, these observations also show qualitative similarity to our model. Jiang et al. (2006) also discuss 5 other coronal sources in detail. All show the emergence of a thermal component from an initial power-law. The evolutions of the spectral indices of some of these flares are less clear; however, some are clearly soft-hard-soft in nature. The observations of the 27 June 1980 flare shown in Fig. 4 span a period of approximately 12 minutes. Around 3 minutes elapse between interval 1, in which the spectrum appears purely power-law in nature, and interval 10, by which time the thermal component has become the dominant spectral feature. The coronal sources in the study of Jiang et al. (2006) feature an elapsed time of 2 to 3 minutes between the onset of the flare and observations showing the presence of a distinct thermal component. As may be seen from Fig. 3, the model bremsstrahlung spectra show a clearly defined thermal component after a typical elapsed (normalised) time of s = 100. On selecting a coronal density of n_tot = 10¹⁰ cm⁻³ (Jiang et al. 2006), and v_0 consistent with low-energy cutoffs of 6-12 keV (in line with the values employed in Galloway et al. 2005; see also Kane et al. 1992), a value of 100 for s corresponds to real times of approximately 1 to 3 minutes, which are comparable to the evolution times of the observed flare electrons.
How much more efficient an X-ray source is the contained, self-interacting electron population? The previous section gives us some guidance as to the relevant parameter regime. In the cold, thick target model the high-energy form f̃_∞, given by Eq. (6), is assumed to hold exactly at all energies. The photon flux with this assumption is given by the corresponding thick-target expression, dj̃_tt/dǫ̃ (Eq. 21). In Figure 7 we compare the evolution in time of dj̃/dǫ̃ and dj̃_tt/dǫ̃ for photon energies ǫ̃ = 4.5, 6.75 and 9. Again we use the non-relativistic Bethe-Heitler cross-section. Multiplicative scaling constants depending on v_0 have been neglected since we wish only to compare the time behaviour in the two cases, at different photon energies. This temporal behaviour underlines the following findings. Integrating dj̃_tt/dǫ̃ over time, from s = 0 to ∞, gives a finite result, the total thick target yield of photons at energy ǫ̃ (Brown 1971). The relative efficiency of the self-interacting population as an X-ray source may be described numerically by the ratio R defined in Eq. (22), where we have introduced j̃_MB to stand for the photon flux from the Maxwell-Boltzmann component of f̃ on its own. From the second line of (22) we see immediately that the photon flux from the self-interacting electron population is always greater, in any photon energy range, than from the cold thick target. ñ_MB and T̃, and thus j̃_MB, tend to finite values as s → ∞, so the total photon flux in any photon energy range may become arbitrarily large if we integrate over longer and longer times. The idealisation of a thermally isolated system thus complicates the comparison with the cold, thick target model. With an appropriately chosen time interval, we can evaluate the relative X-ray efficiency of the self-interacting electron population by numerical evaluation of Eq. (22). (We used the Kramers cross-section σ_0 ∼ u⁻² here because the triple integrals in the numerator and denominator of (22) may then be simplified to single or double integrals, speeding up numerical evaluation. Since we only evaluate ratios of fluxes, this should not result in any serious error.) Taking the flare of 27 June 1980 (Lin et al. 1981) as a guide, we adopt p = 2 and fix v_0 by demanding that the final temperature is 24 MK, in agreement with observations. Then m_e v_0²/2 = 2.7 keV. If we adopt for illustration n_tot = 10¹⁰ cm⁻³, then τ = 0.14 s. If we then compare the fluxes over a period of 20 s, characteristic of impulsive phase hard X-rays (i.e. until s = 140), and over a photon energy range between ǫ_1 = 10 keV and ǫ_2 = 100 keV, then we find R = 7. Extending the comparison to 30 s leads to R = 10. The number of electrons needed would be smaller than that demanded by the cold thick target by a similar factor.
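As a rough cross-check on the timescales quoted above, the sketch below evaluates a standard-form electron-electron collision time, τ ≈ m²v₀³/(4π e⁴ n lnΛ), and converts the dimensionless comparison interval into seconds; lnΛ = 20 is an assumption, and the paper's own definition of τ may differ from this by a factor of order unity.

```python
import math

# CGS constants
m_e = 9.109e-28      # electron mass, g
e   = 4.803e-10      # elementary charge, esu
keV = 1.602e-9       # one keV in erg

def collision_time(E0_keV: float, n_cm3: float, coulomb_log: float = 20.0) -> float:
    """Order-of-magnitude electron-electron collision time in seconds (standard form)."""
    v0 = math.sqrt(2.0 * E0_keV * keV / m_e)               # characteristic speed, cm/s
    gamma = 4.0 * math.pi * e**4 * coulomb_log / m_e**2    # collisional constant, cm^6 s^-4
    return v0**3 / (gamma * n_cm3)

tau = collision_time(E0_keV=2.7, n_cm3=1e10)   # parameters quoted in the text
print(f"tau ~ {tau:.2f} s")                    # of order 0.1-0.2 s
print(f"s = 140 corresponds to ~{140 * tau:.0f} s of real time")
```

With E_0 = 2.7 keV and n_tot = 10¹⁰ cm⁻³ this gives τ of order 0.1-0.2 s, broadly consistent with the 0.14 s quoted in the text.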
While a precise evaluation of the specific efficiency enhancement afforded by this process is hampered by some of the assumptions made in this treatment, we may nevertheless make an approximate comparison to the cold, thick target situation. We find that our model requires approximately a factor of 7-10 fewer accelerated electrons than the cold, thick target model to generate an equivalent hard X-ray photon flux. Thus, our model may alleviate some of the existing heavy requirements on the flare fast electron acceleration mechanism. Eq. (10) predicts a simple relationship between the initial spectral index, measured at early times before the spectrum has thermalised, and the temperature of the thermal component that becomes dominant late in the evolution of the source. Both quantities measure the mean energy per particle and only conservation of total energy is needed for this relationship to be satisfied. With a large enough sample of sources this would provide a very good test of this picture. More generally, many properties of the model are fixed by p and v_0. The density is only needed to normalise to a given, total emission measure, and to determine the (dimensional) timescale for evolution. The self-interacting source population's efficiency is limited in practice by the timescales for radiative or conductive energy loss, and by the validity of our assumption of perfect trapping. The latter may not be quite as unrealistic as it appears. Krucker et al. (2009) report the observation of an apparently contained source in which nonthermal electrons are dominant. A detailed comparison of our model with at least the decay phase of this event will be carried out in future work. Electrons accelerated at or near a magnetic null or region of very low field strength might be naturally contained near the acceleration site by the inevitably very high mirror ratios (Fletcher & Martens 1998). A full evaluation of the validity of this approximate analytical treatment would depend on comparisons with a numerical solution using the complete expressions for the Rosenbluth potentials, which would be a non-linear problem. However, the suggestive similarity of the predictions of this initial model to some aspects of observed spectra indicates that a more complete treatment, in company with corresponding flare models, would be worth pursuing. Such an approach, incorporating additional properties such as energy losses from the thermal component, would facilitate a detailed, quantitative comparison between this model and the cold, thick target model. The consequences for the flare energy budget could then be investigated more fully. Such enhancements notwithstanding, these initial results nevertheless qualitatively reproduce the growth of a thermal component from a non-thermal, power-law HXR source over comparable timescales to those seen in some flare observations. Thus, they may provide useful insight into the evolution of flare electrons, particularly in coronal sources and those featuring a soft-hard-harder spectral progression.
Enter the Dragon: The Dynamic and Multifunctional Evolution of Anguimorpha Lizard Venoms While snake venoms have been the subject of intense study, comparatively little work has been done on lizard venoms. In this study, we have examined the structural and functional diversification of anguimorph lizard venoms and associated toxins, and related these results to dentition and predatory ecology. Venom composition was shown to be highly variable across the 20 species of Heloderma, Lanthanotus, and Varanus included in our study. While kallikrein enzymes were ubiquitous, they were also a particularly multifunctional toxin type, with differential activities on enzyme substrates and also the ability to degrade the alpha or beta chains of fibrinogen, reflecting structural variability. Examination of other toxin types also revealed similar variability in their presence and activity levels. The high level of venom chemistry variation in varanid lizards compared to that of helodermatid lizards suggests that venom may be subject to different selection pressures in these two families. These results not only contribute to our understanding of venom evolution but also reveal anguimorph lizard venoms to be rich sources of novel bioactive molecules with potential as drug design and development lead compounds. Further, Sweet's assertion that the genetic support for Toxicofera as a clade is erroneous due to short-branch/long-branch attractions [10] is a flawed argument, as long-branch attraction is not a serious issue for recent likelihood-based methods (as it was with older parsimony-based methods), particularly when good model-selection procedures are used. Also, for DNA sequence data, long-branch attraction is not expected to be an issue where more than 20% of the characters favour that topology (as is the case in the Toxicofera situation [11-22]). Indeed Sweet himself notes [10] that only a minority of the evidence cited in a review by Sites, Reeder & Wiens [37] did not support the grouping of anguimorphan and iguanian lizards as part of the Toxicofera clade. Thus Sweet's main argument against Toxicofera is one of inertia due to the weight of history, as demonstrated by his statement "that dozens to hundreds of morphological, behavioral and ecological synapomorphies must be reversed is strong evidence of a false signal in the genes supporting the short Toxicoferan branch" [10]. Naturally, the historical morphological, behavioural and ecological observations were viewed through the filter of the understanding of the organismal relationships at the time. The fact that these observations will have to be reinterpreted due to genetic evidence overturning morphology-driven taxonomy should not be an impediment to the acceptance of newer, more robust data regarding the taxonomical arrangements. Similarly, the historical ecological and predatory observations of varanid lizards, which were viewed through the filter of the assumption that these lizards were non-venomous due to the perceived lack of venom glands, need to be reinterpreted (and even redone) in light of the discovery that varanid lizards in fact possess complex oral glands homologous to those of helodermatid lizards and secrete many of the same proteins with similar bioactivity [23-34]. Therefore, the weight of history should not stand in the way of scientific advancement.
This lack of acceptance of more lizards being venomous than previously understood is mirrored by the reluctance of some [35,38] to accept that all advanced snakes share a common venomous ancestor [25,29,34,36,39]. Importantly, more recent phylogenetic analyses that have combined morphological and molecular data, and simultaneously attempted to resolve the topological conflict by careful analysis and treatment of its source in the data, have again recovered Toxicofera as a robustly supported clade and uncovered new morphological synapomorphies of the clade [14,21]. Controversy regarding the phylogenetic validity of Toxicofera is often intertwined with that surrounding hypotheses concerning the evolution of venom within the clade, a situation compounded by use of the poorly defined term "Toxicofera hypothesis". Although Hargreaves et al. (HEA [40]) used this term to refer to the "single early evolution of venom in reptiles", the original publication concerning this "early origin" hypothesis almost exclusively prefers the term "venom system" over "venom" [23]; thus, the original "Toxicofera hypothesis" (of venom evolution) concerns the single early origin of the venom system, not "venom" per se, in that all lineages possessed uniquely derived mandibular and maxillary glands distinguished by having segregated protein- and mucus-secreting regions, with enlargement of the protein-secreting region relative to that of the mucus-secreting region. More broadly, however, the term "Toxicofera hypothesis" could refer either to a hypothesis concerning the phylogenetic existence and constituency of the Toxicofera clade, or to one of several interpretations of the evolutionary history of venom systems within the clade. Regardless of evidence or views on venom evolution within the clade, the weight of evidence (whether morphological or molecular) strongly supports the existence of the clade Toxicofera. It should be noted that the Hargreaves study [40] challenging the evolution of venom in lizards relied upon unreliable data, ranging from tissue expression values that varied up to 5000-fold within a single set of replicates through to averages of only n = 2, including some in which one of the two values was 0 (Supplementary Tables S5-S9 of [40]). Further to this, the trees presented in that paper ([40] Figures 5 and 6, and Supplementary Figures S1, S2, S4a, S7, S9, S12, S14 and S16-S18) contained nodes of less than 50% support or included nodes without support values. Had node support been presented in the main paper with the standard protocol of collapsing nodes below 50%, their trees would be largely polytomies with very little topological information. Given that many of their conclusions were based on the topology of those trees, we must consider that those conclusions were not supported by their data or analyses. Therefore, their assertion that many reptile venom proteins are simply expressions of 'housekeeping genes', which would call into question the single early origin of the venom system, is not phylogenetically supported. Some of the controversy surrounding the "Toxicofera hypothesis" (sensu HEA) of venom evolution has been blamed on inconsistencies in definitions of the word "venom" and the designation of certain species as "venomous".
In reality, the various definitions of "venom" differ little: the consensus is that venom is an actively delivered (e.g., via a bite or a sting) secretion that functions in (i.e., has been selected for the purpose of) the subjugation of prey or the deterrence of predators/competitors [29,38,41]. A popular wording of this definition [42], sometimes considered to be more restricted than more recent formulations [43], introduces a point of confusion by including "subjugation and/or digestion and/or in defence", which suggests that digestion is a distinct function of "venom". This would greatly complicate matters, as oral secretions (e.g., human saliva) in general aid digestion. Here, we prefer the more restricted definition: whilst digestion is undoubtedly a function of the oral secretions of the lizards examined in the present study, that fact alone does not qualify the secretions as "venoms". Confusion has thus been fueled less by differing definitions of "venom" than by different applications of the term, and particularly its conflation with the term "venom system". This confusion stems from the use of the two terms interchangeably in papers concerning Toxicofera going back at least as far as 2006 [23]. It must be stressed that taxa possessing a "venom system" are not necessarily "venomous"; in particular, we have frequently referred to the venom system of iguanian lizards as "incipient" (e.g., [30]; and see [41] for a discussion of the use of the term incipient in evolutionary contexts). So, the "Toxicofera hypothesis of venom evolution", more properly considered (although this discussion does not constitute a formulation of such a hypothesis), concerns the early evolution of the venom system, i.e., the synapomorphy of the Toxicofera clade, which (as above) is the presence of protein-secreting oral glands (i.e., an incipient venom system) which may be considered exapted for the subsequent development of sophisticated venom delivery systems. In making a selectable contribution to the subjugation of prey, it is not necessary for a venom to be capable of rapidly killing or incapacitating the prey item. Venoms do not necessarily function as stand-alone prey subjugation mechanisms; they may be used in concert with physical means of subjugation. Thus, a venom system need only marginally increase a predator's chances of successfully securing a meal, e.g., by slightly weakening a potential prey animal, making it easier to subdue physically [41]. When the effects of a varanid lizard venom are contrasted with those of the venom of a highly front-fanged snake such as a taipan as an argument against such lizards having a 'true' venom [43], the differences are indeed stark, but this is a spurious comparison. It does not constitute a strong argument against the lizard being venomous; instead it only serves to illustrate the fact that venom systems exist in myriad states of development within Toxicofera, as they do within the advanced snakes themselves [39]. Criticisms of (some variants of) the Toxicofera hypothesis of venom evolution notwithstanding, it is clear that a core set of toxins is present in the venoms of all anguimorph lizards studied to date, including CRiSP, kallikrein, B-type natriuretic peptides and type III phospholipase A2 (PLA2) [23-25,27-34].
It is an important point that the presence of a particular species of protein within a secretion does not demonstrate the function of that secretion, which is why the evolution of venom must be studied by considering the venom system as a whole: functional assays should be deployed, as well as considerations of associated anatomy (i.e., delivery mechanisms) and organismal feeding ecology. Some of the previously listed toxins are responsible for the hypotensive effect of intravenous injection of crude venom in rats [22,32,33], such as aortic smooth muscle relaxation by natriuretic peptides equipotent to forms recovered from venomous snakes [44]. In addition, kinin release from kininogen by kallikrein enzymes is another source of hypotensive effects resulting from lizard venoms [45]. It has also been previously noted that injection of rodents and birds with V. griseus venoms resulted in paralysis [46]. Reports of human envenomation by monitor lizards have been noted both in the literature and anecdotally. A bite report documented effects for a V. griseus bite to a zoo keeper that included symptoms similar to those of the bird and rodent studies: dizziness, muscular weakness and soreness in the extremities, facial and eye muscle pain, respiratory distress and pain (local and systemic) [47]. Additional reports of V. griseus bites to zoo keepers recorded similar symptoms, including dysphagia, chest tightness, muscle soreness in the extremities, facial pain, dizziness, and difficulty walking [48]. In all the above V. griseus cases, the bite victims were experienced zoo keepers who did not view the bites with concern upon their occurrence. Vikrant and Verma (2014) reported a lethal bite by the related V. bengalensis that induced local pain and blood loss, as well as nausea, diaphoresis, dizziness, and breathlessness in the victim, and eventually led to an acute kidney injury and cardiac arrest [49]. The actual culprit responsible for this bite has been questioned by clinical toxinologists, who considered it more likely to be a venomous snakebite [50]. However, it must be noted that this dissenting opinion, by people who were not involved in the case, did not provide any new and contradictory evidence. In contrast, a local wildlife officer who was present at the event confirmed the identity of the monitor lizard, and the attending physicians documented the bite wounds as being inconsistent with the puncture wounds characteristic of bites by venomous snakes but rather consistent with the lacerations produced by monitor lizard bites (Vikrant personal communication). Thus, while this case is extraordinary, the possibility of dangerous bites from varanid lizards to humans, under exceptional circumstances, should not be discounted. Another case report described a bite by a juvenile V. komodoensis that led to faintness, prolonged bleeding and transient hypotension [51]. The authors attributed these effects to a vasovagal reaction, despite noting themselves the similarity to reported in vitro effects of V. komodoensis venom [33] and despite a vasovagal reaction being unable to explain the prolonged bleeding. Anecdotally, a great many varanid lizard bites to biologists, zookeepers and amateur reptile enthusiasts have resulted in little that could be attributed to the action of toxins; however, some bite victims do report burning sensations, prolonged bleeding and inflammation disproportionate to the mechanical damage inflicted [10].
In addition to published cases, BGF has been contacted by keepers of varanid species who had symptomatic cases, sometimes with taxon-specific effects: 3 cases of V. albigularis reporting extreme muscle soreness and weakness lasting days, 4 cases of V. kordensis producing significant local swelling of the bitten finger and adjacent fingers, 5 cases of V. varius with pronounced bleeding lasting 3-4 h, a V. salvadorii bite with similar symptoms to those of V. varius, and a V. komodoensis bite also producing apparent anticoagulant effects. These effects are consistent with the characterised venom chemistry of anguimorph lizards [23,29,32,33]. When compared with helodermatid lizard bites, which have received far more research attention than those of their varanoid cousins, a key point becomes evident: duration of contact. Typically, a helodermatid lizard will stay attached, chewing more venom into the bite site, while a monitor lizard is less likely to hold onto something that is not a food item, although in some cases it may hold on for up to half an hour when defensively biting. This likely leads to differences in the amount of oral fluids inoculated into the victim, with symptomatic human bites typically being those in which the varanid lizard chewed for a prolonged period of time. Interestingly, while feeding on large prey items, varanid lizards seem to have a tendency to shake them violently and chew vigorously until they are subdued, which may facilitate venom delivery as well as potentiating mechanical damage [52]. Varanoid lizards are characterised by a refined mandibular venom gland that is homologous with that of the helodermatid lizards [23,25,27,29,32,33,53]. Most anguimorph lizards have simple-structured mandibular venom glands; however, Heloderma and Lanthanotus/Varanus have independently evolved complex glands [32]. Both lineages derived their compartmentalised venom glands from the ancestral anguimorph lizard condition of an enlarged, mixed sero-mucous gland in which the protein- and mucous-secreting regions are not segregated into distinct glandular structures. In both cases, sophisticated structures with separated protein and mucous regions, a structured lumen for storing liquid venom, and a thick membranous cover have evolved. Further to this, morphological as well as molecular evolutionary studies have indicated that these glands are homologous with the venom glands of snakes [23,25,27,29,32,33,53]. The fact that varanid lizards possess highly developed dental glands suggests that those glands in one way or another play an important role in their biology. It has been shown that venomous snakes which switch to eggs or to constriction as an alternative form of prey capture rapidly lose the functionality of their venom glands, with atrophy occurring over short periods of (evolutionary) time [25,29,54,55]. The evolution of a complex venom system is likely only possible under certain contingent circumstances-i.e., when both environmental conditions select for it and a species' overall evolutionary trajectory facilitates it. For example, in Iguania the incipient venom glands never developed any significant complexity, probably due to the mainly insectivorous/herbivorous nature of these lizards. In addition, when a species evolves an alternative method of subduing prey that renders the venom system redundant, or switches to a diet with no need for subjugation (e.g., plants), the venom system often degrades [25,29,54,55].
The cost of venom production is presumably high enough to justify the presence of active secretory and delivery apparatuses only when it confers an evolutionary advantage [56]. It is notable, however, that in iguanian species which include a large proportion of vertebrates in their diet, the glands are larger and the protein-secreting region more developed (though we stress that this is not equivalent to them being 'venomous' per se, it is strongly suggestive of a functional role in predation) [29,30]. Previous work on helodermatid lizards has demonstrated a striking level of proteomic conservation within the venoms of this clade, with the same basic toxin groups present in similar quantities [27,57] despite the most recent common ancestor (MRCA) of the five Heloderma species existing ~15-20 million years ago [11]. While the venom glands of Varanus species have been compared transcriptomically, the only proteomic comparisons to date were limited to SELDI mass spectrometry [23,32,33]. However, a diversity of components has been functionally characterised from helodermatid and varanid venoms (Table 1), and since studies have demonstrated the complexity and medicinal potential of Heloderma venom [27,[58][59][60][61][62][63][64], it is of interest to study the venom system of varanoid lizards in detail.
Figure 1. Organismal relationships, sizes and ecological niches occupied (blue = aquatic, brown = terrestrial, green = arboreal). Phylogeny based on [65][66][67]. Note that as it only includes species from this study, it excludes other anguimorph lizards, such as anguids, that intervene between Heloderma and Lanthanotus/Varanus. Therefore, the aim of this study was to undertake a proteomic and functional comparison of varanoid lizard venoms in order to uncover patterns of venom evolution across the clade and to add to the body of knowledge regarding their controversial evolution. In view of treating the varanid lizard venom apparatus as an integrated system, the teeth of each species were also examined. Varanid lizards belong to the lizard clade Anguimorpha, which also contains anguid and helodermatid lizards [13,14,67], of which the gila monster (Heloderma suspectum) and Komodo dragon (V. komodoensis) are the most well-known species. Varanus is the sole extant genus in the family Varanidae and its closest extant relative is Lanthanotus borneensis (Borneo earless monitor, Lanthanotidae), also included in this study. Together, Varanidae and Lanthanotidae (along with numerous extinct taxa) comprise the superfamily Varanoidea, of which Heloderma was previously also considered a member. Lizards in the genus Varanus are squamate reptiles with total body length ranging from 23 cm for adult V. brevicauda to over 3 m for V. komodoensis (Figure 1). Teeth Scanning Electron Microscopy In the 20 species of varanoid lizard examined (Figure 1), tooth morphology and serration patterns were compared (Figures 2 and 3). Proteomics Studies Proteomic analyses revealed the venoms to contain a large diversity of components, with kallikrein predominating in all species except V. griseus and V. varius, suggesting it plays a lesser role in these species (Figures 4 and 5, Table 2). Full MS/MS data are available in Supplementary Table S1. 2D gels show that the kallikrein toxins possess significant diversity in isoelectric point and molecular weight (Figure 5), ranging from the narrow but dense spots of V. giganteus (Figure 5B) and V. mertensi (Figure 5E), through to the evolution of differential forms in V. prasinus (Figure 5G), and extending to poorly staining low levels in V. griseus (Figure 5D) and V. varius (Figure 5L).
There were significant kallikrein expression variations in closely related taxa, ranging from broad isoelectric point variation in V. scalaris (Figure 5J) relative to V. tristis (Figure 5K), through to downregulation in V. varius (Figure 5L) relative to V. salvadorii (Figure 5I). L. borneensis venom displayed a 1D pattern more similar to the putatively ancestral type shared with Heloderma species (Figure 4) [57]. In contrast, the varanid 1D gels (Figure 4) revealed that extensive variation is detectable outside the regions of the gel corresponding to kallikrein (which remains relatively invariant across species). Chitinase/chitotriosidase were present in all the venoms of the dwarf species, but virtually absent from those of larger species (Figures 4 and 5, Table 2). Kallikrein Molecular Evolution The molecular phylogeny of the available anguimorph kallikrein sequences showed evidence of gene duplication and diversification (Figure 6). In contrast to a previous study challenging kallikrein as a shared toxin type between snake and lizard venoms based on the non-monophyly of toxicoferan oral gland sequences relative to non-toxicoferan oral gland and body tissue forms [40], we recovered all toxicoferan gland sequences as a monophyletic group relative to sequences from body tissues or from non-toxicoferan species. As the authors of that study did not provide their alignments or run files despite repeated requests by us, we cannot ascertain whether the discrepancy is due to alignment issues in that previous study or to differences in phylogenetic methods. Regardless, our tree differs from that of [40]. Molecular modelling demonstrated that the overall dN/dS value for anguimorph kallikrein toxins for which nucleotide data are publicly available was 0.80, indicating that the protein as a whole has been subject to neutral or weak purifying selection. Under negative or purifying selection, less "fit" nonsynonymous substitutions accumulate more slowly than synonymous substitutions, and under diversifying or positive selection, the converse is true. However, the FUBAR (Fast Unconstrained Bayesian AppRoximation) and MEME (Mixed Effects Model of Evolution) methods detected a number of individual sites on the toxin surface (Figure 7) that are likely to have been subject to diversifying selection. This suggests that these sites may be important in the coevolutionary arms race between anguimorphs and their prey and may well play a role in the function of the toxins [87]. It should be noted that the analyses were conducted without the inclusion of a H. horridum sequence originally stated to contain an insert characteristic only of snake venom forms [84] but subsequently shown to be erroneous [32].
However, in the intervening time, this sequencing error led a study to conclude that convergent evolution had occurred between lizard and shrew toxin forms [88], and it has been repeatedly cited as evidence for such convergence. Outside of the core kallikrein enzyme, other widely expressed protein types included CRiSP, lysosomal acid lipase and PLA2. However, in contrast to the generally strong presence of kallikrein, these were more variable. Shotgun MS/MS revealed the presence of additional toxin types not discernible by gel-based methods, such as the small natriuretic peptides (Table 2). Lysosomal Acid Lipase Molecular Phylogeny Molecular phylogeny revealed the anguimorph lizard lysosomal acid lipase sequences to form a clade distinct from those sequenced from snake venom glands (Figure 8). Bioactivity Testing Bioactivity testing revealed dynamic variation in activities between and within each assay type, with statistically significant differences between species for each assay (Figure 9, Table 3). Kallikrein Enzymatic Activity upon Fluorescent Substrates Consistent with the evidence for kallikrein molecular diversification, the ability of the venoms to cleave serine protease substrates varied significantly among the species studied, but with phylogenetic patterns evident (Figure 10). In addition, some species' venoms were potent on one substrate but not the other, with V. mitchelli the most potent upon both substrates. Fibrinogen Cleavage Gels As kallikrein toxins isolated from Heloderma venoms have been shown to exert non-clotting, destructive cleavage of fibrinogen [79,85], we investigated the relative time-dependent actions of the venoms in this study upon fibrinogen (Figures 11-13). Intriguingly, differences in cleavage products indicated differential cleavage sites. A consistent pattern emerged in which all venoms displayed some ability to cleave the Aα chain, while the Bβ chain was cleaved more slowly and only fully destroyed by the venoms with the most potent action upon the Aα chain. High relative Aα chain activity predicted correspondingly high relative Bβ chain activity (PGLS: t = 7.3532, p = 7.969 × 10^-7). Aα chain cleavage was well predicted by activity upon the substrate S-2302 (PGLS: t = 2.8968, p = 0.009611, Figure 14), as was Bβ chain activity (PGLS: t = 2.8459, p = 0.01073, Figure 15). However, the γ chain was untouched even in species which fully consumed the Bβ chain. Phospholipase A2 Enzymatic Activity upon Fluorescent Substrate V. varius venom had a dramatically higher level of PLA2 enzymatic activity than the other species tested (Figure 16), and this obscured the fact that other species also displayed significant levels of PLA2 activity, consistent with previous transcriptomic, proteomic or functional evidence for the presence of this enzyme type [23,32,86]. Rat Ileum Contraction Organ Bath Assay V. varius venom demonstrated strong smooth muscle contracting activity (Figure 17), consistent with the previously documented presence of AVIT and cholecystoxin peptides and the demonstrated action of snake venom homologues of this toxin type [32,33,113]. Due to lack of sufficient venom, other species were not tested.
Figure 14. Ancestral state reconstructions over branches comparing substrate consumption relative to alpha chain destruction. Figure 15. Ancestral state reconstructions over branches comparing substrate consumption relative to beta chain destruction. Figure 16. Ancestral state reconstructions over branches for PLA2 enzyme substrate consumption, where warmer colours represent more fluorescence production. For all three figures, bars indicate 95% confidence intervals for the estimate at each node; note that due to the high dynamicity of venom evolution these quickly become broad as one moves down the tree. Phylogeny follows [65][66][67]. Discussion The venoms of varanoid lizards remain understudied in evolutionary toxinology; however, multiple sources of evidence point to the adaptive evolution of venom in this clade. The present study revealed the differential complexity of anguimorph lizard venoms across a wide taxonomical range (Figure 1) and considered its evolutionary context as part of a combined predatory arsenal. Prior to this study, tooth form and function had been reported for very few species of varanid lizards [1,114,115], and no study had compared tooth morphology and function across many species. The presence of serrations can be an important indicator of tooth function in reptiles. Serrated teeth cut compliant material by presenting less contact area with the material, and thus the applied force at each point of contact is relatively greater. Serrations also act to trap and cut sections of material with a 'bite and slice' mechanism, allowing a sliding force to be transferred into a cutting force [116]. Serrations have been reported for the teeth of V. komodoensis, V. salvator, and V. varius [1,114,115]. Juvenile V. niloticus also appear to bear serrations on the anterior and posterior edges, but these are lost in the adult form, in which the teeth become smooth and blunt, reflecting a change in diet from insectivorous to molluscivorous in adults [117]. Slightly serrated teeth appear to be the plesiomorphic condition of varanid lizards (Figures 2 and 3).
Significant diversification from the inferred plesiotypic tooth state was evident in two groups: the V. varius clade and members of the Odatria subgenus of dwarf monitors (V. acanthurus, V. baritji, V. gilleni, V. mitchelli, V. scalaris, and V. tristis) (Figure 3). Each of these groups also possesses morphological features and predatory strategies characteristic of the particular clade. V. komodoensis, V. salvadorii and V. varius had the most pronounced serrations, consistent with these species typically feeding on thick-skinned large mammalian prey that they dismember. Thus, while there were no evident direct links between venom chemical composition and serrations, there were links between serrations and prey dismemberment strategies. The aquatic lineage V. mertensi thus represents a secondary loss of serrations. The dwarf species (Odatria) have also secondarily lost the serrations, with the sole exception within the clade being the arboreal lineage V. scalaris, which has subsequently re-evolved serrations. The reasons for the unique (amongst Odatria) evolution of serrated teeth in V. scalaris may be linked to the inclusion of large, chitinous insects in its diet, which may require dismemberment before ingestion, or may be linked to active defence of territory from conspecifics [118]. In addition, this species predates upon the large treefrog Litoria splendida in its range, which it dismembers, eating only the legs and avoiding the large toxin-secreting parotid glands (Fry personal observations). The extreme proteomic variability of varanid venoms, in contrast to the highly conserved nature of helodermatid venoms (Figures 4 and 5), taken in concert with the evidence of duplication and diversification of toxin genes such as kallikrein within Varanus (Figures 6 and 7) and of other toxin types analysed previously [119], is suggestive of adaptive evolution as a generator of toxin diversity in Varanus venoms. The venoms studied contained myriad toxins, although, with the exception of kallikrein, none of those components were consistently highly expressed across the venoms. Thus, a kallikrein-dominated venom is inferred to be reflective of the condition of the venom system of the anguimorph lizard MRCA. These functional and proteomic results are congruent with previous transcriptome studies, which recovered kallikrein as the dominant toxin type [23,27,32,33]. Kallikrein enzymes have clearly undergone significant structural and functional diversification within the clade. There was evidence of gene duplication (Figure 6) and diversification of the molecular surface biochemistry (Figure 7). Cleavage of the two fluorescent substrates was incongruent, indicating variation in enzyme specificity (Figure 9A,B and Figure 10). Similarly, the actions upon fibrinogen produced variations in degradation products even when the same chains were targeted, also indicative of diversification of enzyme specificity (Figures 11-13). Thus, there is strong evidence for adaptive evolution of kallikrein toxins in the venoms of anguimorph lizards. The venoms of larger species of monitor were generally seen to be more complex than those of their smaller congeners (Figures 4 and 5), which may be explained by the broader dietary range in larger species, both of the adult animals and across the life history. Thus, while multifunctional kallikrein enzyme toxins form the venom core, there has been additional diversification in the chemical composition of the venoms. As dietary studies of V.
varius show [120], it has the broadest possible dietary range, feeding on everything from small invertebrates and eggs to medium-sized mammals, and most likely experiences ontogenetic niche shifts, which further necessitate adaptation to different prey items. This species also has the most distinct venom of all those in this study, with a secondary downregulation of kallikrein paralleled by an upregulation of other toxins, which produce effects ranging from the muscle contraction observed in this study (Figure 17) through to the platelet-inhibition-mediated anticoagulation shown previously [33]. Smaller species (members of the Odatria subgenus), on the other hand, predominantly feed on lizards and insects throughout their life [121]. These species retain potent fibrinogen destruction activities in their kallikrein toxins, with the exception of V. gilleni (Figures 10-12), the smallest species studied. It is notable that V. gilleni had the least fibrinogen-targeting activity despite being rich in kallikrein toxins and having a stronger action upon substrate S-2302 than some other species which displayed stronger fibrinogen destruction activity (Figure 8). Other species were also rich in kallikrein on the gels but without corresponding activity upon either of the substrates tested. The kallikrein action in these species may instead be linked to high kininogen cleavage activity, with liberated kinins causing a rapid hypotensive effect in envenomed prey/predators. As this was beyond the scope of this study, future work on anguimorph lizard venoms should include assaying the relative release of kinins from kininogen by these venoms and the role this plays in the induction of hypotension. Previous transcriptome studies revealed kallikrein transcripts as the dominant type in the venom gland transcriptomes of varanid lizards and snakes, and these toxins have been inferred as present in the oral glands of the Toxicofera MRCA [23,24,[27][28][29][30][32][33][34]. Characterised toxicoferan venom kallikreins induce fibrinogen depletion (Figures 11-13) in the prey organism and thus promote prolonged bleeding [24], as well as induction of hypotension through the release of kinins from kininogen [45]. Predators likely benefit from inducing blood loss or altering the blood pressure of their prey, as this will increase the chance of successful subjugation by weakening the prey. Previous work has shown that the type III PLA2 in V. varius venom blocks platelet aggregation, thus interfering with blood clotting via the same pathway as do homologous toxins in the venom of Heloderma species [33]. This species also had extremely high levels of PLA2 activity in comparison to all other species (Figure 16). In the case of some monitor lizards, in particular the larger species such as V. varius or V. komodoensis, this type of toxic action might be beneficial even if a prey item manages to escape the initial attack but subsequently succumbs to blood loss or shock. Field studies on V. komodoensis have indeed shown such post-bite mortality in a significant percentage (~20%) of prey animals [33,122]. Venom may have functions other than aiding in the subjugation of prey. For instance, venom may help in antipredator defence or in intraspecific competition, and may have additional roles in digestion, by providing additional enzymatic activity, or in maintaining oral health by including antimicrobial agents [29,30,41,123,124].
Indeed, venoms are likely to be multifunctional in many (perhaps most) species, with either the same or different toxins facilitating multiple ecological functions [26,29,30,41,123,125,126], and the range of roles venom may play in varanids, and their relative importance, remain uncertain. In addition to possible roles in predation, defence or competitor deterrence, the role of varanid oral secretions in aiding digestion (a typical function of vertebrate oral secretions) has previously been put forward as a testable hypothesis [123], and our data appear to support this additional role, at least for certain species. While the venoms of larger species showed evidence of derived activities, so did those of the dwarf species. Dwarf monitors, unique to Australia, grow no larger than 600 mm SVL; chitinase enzymes are likely helpful in the digestion of the thick exoskeletons of the arthropods that are their primary food source. One salient hypothesis concerning effects of varanid lizard bites, such as hyperalgesia, inflammation and prolonged bleeding, is that the toxins responsible for these effects (particularly those of dwarf species that are frequently predated upon by pythons) may also serve defensive roles [10,123]. While kallikrein enzymes produce painful swelling, hyperalgesic cramping is a feature of the AVIT toxin type, which is also consistent with the strong muscular contraction observed for V. varius venom (Figure 17). Painful cramping would also aid in prey subjugation through loss of mobility. Another possible function of the secretions is intraspecific competition, particularly male-male combat, similar to the function of venom in platypus ecology [124]. In both such cases, rapid induction of hypotension by kinin release from kininogen would also be beneficial, thus reinforcing the multifunctional use of this functionally flexible enzyme type. The presence of kallikrein alongside chitinase in the venoms of Odatria suggests that these secretions may serve multiple functional roles for these lizards-multifunctionality of toxic secretions is common in venomous organisms, and where venom is an oral secretion it often preserves its more primitive role in digestion [41]. Some components recovered in the study are very likely to be adaptations to arthropod-based diets, which strengthens the point that venom glands in varanoid lizards likely have more than a single function. That is, in addition to their potential role in prey subjugation, defence, or intraspecific competition (the classic roles of "venom" sensu stricto [41]), varanoid lizard oral secretions may potentially aid in digestion. According to our results, varanoid lizard venom is largely based on kallikrein toxins that previous studies have shown to be homologous with those present in the venom of advanced snakes [23,24,[27][28][29][30][32][33][34]. Additional components are present in various species, with profile complexity seemingly being a function of size and habitat: larger monitors possess the most complex venoms and smaller or aquatic species the least. The high level of variability of varanid venoms, relative to the high level of conservation in helodermatid lizards, points to active evolution under selection pressure. Components such as lysosomal acid lipase (Figure 8) are functionally uncharacterised but underscore the rich biodiscovery potential of lizard venoms.
As we were able to examine the venoms of only 16 of the more than 60 extant species of varanid lizard, this study is clearly not a comprehensive investigation of the evolution of venom in Varanus, and future studies are very likely to provide additional novel insights. For example, it will be fruitful to investigate the venom profile of V. salvator, a large but predominantly aquatic lizard, and of frugivorous species such as V. olivaceus. Nevertheless, we note that turning away from this fruitful area of research by denying the biochemical reality of lizard venom will hinder progress in this fascinating area. For example, because of their chain-selective fibrinogen-degrading activity, kallikrein enzymes from snake venom have been used as anticoagulant treatments for stroke, heart attack, and deep-vein thrombosis [85,127]. The functional diversity of such enzymes in the lizard venoms in this study underscores the rich biodiscovery and therapeutic potential of these novel natural products. Species Studied Australian samples were previously collected at the same time as a transcriptome study [32] under University of Melbourne (2005) approval UM0706247, as part of the long-term cryogenic collection of the Venom Evolution Laboratory, while non-Australian samples were supplied by Alphabiotoxine. Species studied were H. exasperatum (captive bred specimens of unknown founder stock), H. horridum (captive bred specimens of unknown founder stock), H. suspectum (captive bred specimens of unknown founder stock), Lanthanotus borneensis (captive bred specimens of unknown founder stock), Varanus acanthurus (Newman, WA, Australia), V. baritji (Adelaide River, NT, Australia), V. giganteus (Sandstone, WA, Australia), V. gilleni (captive bred specimens of unknown founder stock), V. griseus (captive bred specimen of unknown founder stock), V. jobiensis (captive bred specimens of unknown founder stock), V. komodoensis (Singapore Zoo captive bred specimens of unknown founder stock), V. melinus (captive bred specimens of unknown founder stock), V. mertensi (Kununurra, WA, Australia), V. mitchelli (Kununurra, WA, Australia), V. panoptes rubidus (Sandstone, WA, Australia), V. prasinus (captive bred specimens of unknown founder stock), V. salvadorii (captive bred specimens of unknown founder stock), V. scalaris (Kununurra, WA, Australia), V. tristis (captive bred specimens of unknown founder stock), and V. varius (Mallacoota, VIC, Australia). Three adult specimens for each species were pooled to minimize the effects of individual variation. In order to remove mucus, all samples were filtered through 0.2 micron syringe filters prior to lyophilisation. Scanning Electron Microscopy Teeth were obtained from museum or frozen specimens and dissected out of both the top and bottom jaw. The largest tooth available (usually the ninth) was selected. All teeth were coated with a 20 nm layer of gold and imaged on a Philips XL 30 scanning electron microscope. All images were taken at 10 kV and a working distance of 88 mm. These images were used for morphological measurements. To provide a comparative estimate of tooth serrations, the total distance from the tip of the tooth to the base was measured along both the anterior and posterior edge.
The distance along this edge showing serrations was recorded and divided by the total distance to calculate the proportion of the tooth edge that was serrated (% serrations). 1D Gel Electrophoresis In order to establish the proteomic variation, 1D gradient gels were run under both reducing and non-reducing conditions using the manufacturer's (Bio-Rad) protocol. Gels were prepared as follows: 0.05 mL deionised H2O, 2.5 mL 30% acrylamide mix, 1. . The spreading gel was cast first. After it had set, the spacer gel was slowly layered atop it, and after the spacer gel had set, the stacking gel was layered atop that. Running buffers were: 0.2 M Tris-HCl, pH 8.9 (anode buffer); 0.1 M Tris-tricine-HCl, pH 8.45 (cathode buffer). The gels were run at 100 V for three hours at room temperature. 30 µg of venom was reconstituted in Tricine loading buffer (Bio-Rad), with 10 mM DTT added to provide reducing conditions. Gels were stained overnight with colloidal Coomassie brilliant blue G250 (34% methanol, 3% phosphoric acid, 170 g/L ammonium sulphate, 1 g/L Coomassie blue G250). After the staining was complete, water was used to remove excess dye. 2D Gel Electrophoresis In order to further investigate the proteomic variation, particularly isoelectric variation, 2D gels were run using protocols previously optimised in the Fry lab [128][129][130]. 0.3 mg of venom sample was solubilized in 125 µL of rehydration buffer (8 M urea, 100 mM DTT, 4% CHAPS, and 0.5% ampholytes (Biolytes pH 3-10, Bio-Rad Lab)) with 0.01% bromophenol blue. The sample was mixed with shaking and centrifuged for 5 min at 4 °C and 14,000 rpm to remove any insoluble material. The supernatant was loaded onto IEF strips (Bio-Rad ReadyStrip, non-linear pH 3-10, 7 cm and 17 cm IPG) and left overnight for passive rehydration. Protein focussing was achieved via a PROTEAN i12 IEF cell (Bio-Rad Lab). The IEF running conditions were as follows: 100 V for 1 h, 500 V for 1 h, 1000 V for 1 h and 8000 V until 98,400 V·h; the actual current in the final step of the run varied according to resistance, and a current of 50 µA was applied to each strip. After the run, IPG strips were incubated for 10 min in a reducing equilibration buffer (50 mM Tris-HCl, pH 8.8, 6 M urea, 2% SDS, 30% glycerol, 2% DTT) to reduce cysteine bonds. To alkylate the reduced bonds, IPG strips were further incubated for 20 min in an alkylating equilibration buffer (50 mM Tris-HCl, pH 8.8, 6 M urea, 2% SDS, 30% glycerol, 2.5% iodoacetamide). After rinsing with SDS-PAGE running buffer, IPG strips were positioned on top of 12% polyacrylamide gels (Protean-II Plus, 18 × 20 cm, Bio-Rad Lab) using 0.5% agarose. Gels were run at 4 °C with a current of 10 mA per gel for 20 min, followed by 20 mA per gel for the rest of the run, until the bromophenol dye front was within 0.5 cm of the base of the gel. After the run, gels were briefly washed with water and stained with 0.2% colloidal Coomassie brilliant blue G250 overnight. Water was used to remove the excess dye after staining was complete. Visible spots were subsequently picked from the gels and digested with trypsin: gel spots were washed with deionised H2O, destained (40 mM NH4HCO3/50% acetonitrile (ACN)), dehydrated (100% ACN), rehydrated in 10 µL of 20 µg/mL TPCK-treated sequencing grade trypsin (Sigma-Aldrich), and incubated at 37 °C overnight. To elute peptides, the following solutions were used for each gel spot: 20 µL of 1% formic acid (FA), followed by 20 µL of 5% ACN/0.5% FA.
Collected peptides were put into MS vials and subjected to LC-MS/MS analysis. Shotgun Sequencing In order to identify low molecular weight peptides that do not resolve well on 1D or 2D gels, shotgun sequencing was used. 3 µg of crude venom sample was dissolved in 50 µL of 100 mM ammonium carbonate, and cysteine bonds were reduced and alkylated by the subsequent addition of 50 µL of 2% iodoethanol/0.5% triethylphosphine in acetonitrile. The sample was afterwards resuspended in 20 µL of 40 mM ammonium bicarbonate before overnight incubation (at 37 °C) with 750 ng of sequencing grade trypsin (Sigma-Aldrich). To stop digestion, 1 µL of concentrated formic acid was added to each of the samples. Samples were lyophilised, then resuspended in 20 µL of 5% ACN/0.5% FA, put into MS vials and subjected to LC-MS/MS analysis. LC-MS/MS In order to identify the toxin types present, digested gel spots and digested whole venom (shotgun) samples were processed using an Agilent Zorbax Stable Bond C18 column (Agilent, Santa Clara, CA, USA) (2.1 mm by 100 mm, 1.8 µm particle size, 300 Å pore size) at a flow rate of 400 µL per minute and a gradient of 1-40% solvent B (90% acetonitrile, 0.1% formic acid) in 0.1% formic acid over 15 min or 4 min for shotgun samples and 2D-gel spots, respectively, on a Shimadzu Nexera UHPLC (Kyoto, Japan) coupled with an SCIEX 5600 Triple TOF mass spectrometer (Framingham, MA, USA). MS2 spectra were acquired at a rate of 20 scans per second with a cycle time of 2.3 s and optimised for high resolution. Precursor ions were selected between 80 and 1800 m/z with a charge state of 2-5 and an intensity of at least 120 counts per second, with a precursor selection window of 1.5 Da. Isotopes within 2 Da were excluded from MS2 selection. MS2 spectra were searched against known translated transcriptome libraries or the UniProt database with ProteinPilot v4.0 (SCIEX, Framingham, MA, USA) using a thorough identification search, specifying iodoacetamide as the alkylation method and trypsin digestion, and allowing for biological and chemical modifications (ethanolyl C or deamidated N in particular) and amino acid substitutions, including artefacts induced by the preparation or analysis processes. This was done to maximize the identification of protein sequences. In order to remove false positives, only hits with a confidence of at least 95% were examined. RDES0011 Substrate A working stock solution of freeze dried venom was reconstituted in a buffer containing 50% deionised H2O/50% glycerol (>99.9%, Sigma-Aldrich) at a 1:1 ratio to preserve enzymatic activity and reduce enzyme degradation, with a final venom concentration of 0.1 mg/mL, and then stored at −20 °C. Phospholipase A2 Activity We assessed the continuous PLA2 activity of the venoms using a fluorescence substrate assay (EnzChek Phospholipase A2 Assay Kit; ThermoFisher Scientific, Sydney, Australia). A working stock solution of freeze dried venom was reconstituted in a buffer containing 50% deionised H2O/50% glycerol (99.9%, Sigma) at a 1:1 ratio to preserve enzymatic activity and reduce enzyme degradation, with a final venom concentration of 0.1 mg/mL, and then stored at −20 °C. Venom solution (0.1 µg in dry venom weight) was brought up to 12.5 µL in 1× PLA2 reaction buffer (250 mM Tris-HCl, 500 mM NaCl, 5 mM CaCl2, pH 8.9) and plated out in triplicate on a 384-well plate.
Triplicates were measured by adding 12.5 µL of quenched 1 mM EnzChek Phospholipase A2 substrate per well (total volume 25 µL/well) over 100 cycles at an excitation of 485 nm and emission of 520 nm, using a Fluoroskan Ascent (ThermoFisher Scientific). The negative control consisted of PLA2 reaction buffer and substrate only. Rat Ileum Organ Bath Testing Rat ileum muscle preparations were isolated from adult male rats euthanised by CO2 asphyxiation. The isolated preparations were individually mounted in 15 mL parallel organ baths containing a Krebs solution with the following constituents (mM): NaCl, 118.4; KCl, 4.7; MgSO4, 1.2; KH2PO4, 1.2; CaCl2, 2.5; NaHCO3, 25; and glucose, 11.1. The Krebs solution was continuously bubbled with carbogen (95% O2 and 5% CO2) to maintain a pH of 7.2-7.4 at a temperature of 32-34 °C. A resting tension between 1 and 3 g was found to be the optimal starting baseline. Stimulation was performed with 50 µg/mL of crude venom; deionised H2O (170 µL) was used as a control. Fibrinogen Cleavage Gels Human fibrinogen was reconstituted to a concentration of 2 mg/mL in isotonic saline solution, flash frozen in liquid nitrogen and stored at −80 °C until use. Freeze-dried venom was reconstituted in deionised H2O and concentrations were measured using a Thermo Fisher Scientific NanoDrop 2000 (Waltham, MA, USA). Assay concentrations were a 1:10 ratio of venom:fibrinogen, in comparison to the 1:5 ratios used in snake venom testing [131]. The following was conducted in triplicate for each venom: five "secondary" aliquots containing 10 µL buffer (5 µL of 4× Laemmli sample buffer (Bio-Rad, Hercules, CA, USA), 5 µL deionised H2O, 100 mM DTT (Sigma-Aldrich, St. Louis, MO, USA)) were prepared. A "primary" aliquot of fibrinogen (volume/concentration as per the above) was warmed to 37 °C in an incubator. 10 µL was removed from the primary aliquot ("0 min incubation" fibrinogen control), added to a secondary aliquot, pipette mixed, and boiled at 100 °C for 4 min. 4 µg (dry weight) of venom was then added to the primary aliquot of fibrinogen (amounting to 0.1 mg/mL of venom and 1 mg/mL of fibrinogen in 40 µL total volume), pipette mixed, and the aliquot was immediately returned to the incubator. At each incubation time period (1 min, 5 min, 20 min, and 60 min), 10 µL was taken from the primary aliquot, added to a secondary aliquot, pipette mixed, and boiled at 100 °C for 4 min. The secondary aliquots were then loaded into the gels and run in 1× gel running buffer at room temperature for 20 min at 90 V (Mini Protean3 power-pack from Bio-Rad, Hercules, CA, USA) and then at 120 V until the dye front neared the bottom of the gel.
Gels were stained with colloidal Coomassie brilliant blue G250 (34% methanol (VWR Chemicals, Tingalpa, QLD, Australia), 3% orthophosphoric acid (Merck, Darmstadt, Germany), 170 g/L ammonium sulfate (Bio-Rad, Hercules, CA, USA), 1 g/L Coomassie blue G250 (Bio-Rad, Hercules, CA, USA)) and destained in deionised H2O. Phylogenetic Comparative Analyses A phylogeny was assembled using previous genetic studies [65][66][67] and was used for all further analyses, which were conducted in R version 3.2.5 [132] using the ape package [133] for general handling of phylogenetic and trait data. Ancestral states were estimated and reconstructed over the tree in order to investigate the evolutionary history of the traits and, consequently, their relation to one another over time. The continuous functional traits were reconstructed by maximum likelihood using the contMap function in phytools [134]. We then fitted PGLS models [135] in caper [136] to test for relationships between traits. Phylogenetic Reconstruction Kallikrein, lysosomal acid lipase and chitinase datasets were analysed using Bayesian inference implemented in MrBayes version 3.2.1, using lset rates = invgamma with prset aamodelpr = mixed, which enables the program to optimize among nine different amino acid substitution matrices. The analysis was performed by running a minimum of 10 million generations in four chains and saving every 100th tree. The log-likelihood score of each saved tree was plotted against the number of generations to establish the point at which the log-likelihood scores reached their asymptote, and the posterior probabilities for clades were established by constructing a majority-rule consensus tree for all trees generated after completion of the burn-in phase. Molecular Modelling Publicly available kallikrein sequences were retrieved from GenBank by using a Varanus komodoensis kallikrein sequence as the query for a BLAST search within Anguimorpha [137][138][139]. Sequences with less than 70% coverage were discarded. Sequences were edited to include only the codons for the mature protein using AliView and were aligned by using AliView's "Realign everything as translated amino acids" function to translate the codons, align the resulting amino acids using MUSCLE, and reverse-translate them [140,141]. MrBayes version 3.2 was used to create a phylogenetic tree of the sequences for performing the later selection analyses [142]. Using the reptile venom sequence with the closest similarity (GU441485) as input, the Phyre2 web server generated a custom protein structure based on the published structure (PDB ID: 3S9C) [143]; this structure has a resolution of 1.8 Å, was 41% identical to the query sequence, and conserves the cysteine residues. Protein models were rendered in UCSF Chimera version 1.10.2 [144]. Conservation scores were calculated using the UCSF Chimera implementation of AL2CO under the default settings [144,145]. Tests for selection were performed using HyPhy version 2.220150316: the overall dN/dS value was calculated using the AnalyzeCodonData method, persistent site-by-site selection was analysed with the FUBAR method, and episodic site-by-site selection was analysed with the MEME method [146][147][148]. MEME is used for identifying sites that experience episodic selection pressures, whereas FUBAR is an improvement on site-wide selection analyses [148].
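For readers less familiar with PGLS, the sketch below illustrates, in Python, the generalized least squares calculation that underlies a PGLS fit such as those run in caper above: trait relationships are estimated while weighting observations by a phylogenetic covariance matrix derived from shared branch lengths under Brownian motion. The four-taxon covariance matrix and trait values are invented for illustration only; they are not data or code from this study.

```python
import numpy as np

# Illustrative PGLS sketch (not the study's code or data).
# Under Brownian motion, the expected covariance between two tips equals the
# branch length they share from the root to their most recent common ancestor.

# Hypothetical ultrametric 4-taxon tree ((A,B),(C,D)), total depth 1.0,
# with each sister pair sharing 0.6 of that depth.
C = np.array([
    [1.0, 0.6, 0.0, 0.0],
    [0.6, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.6],
    [0.0, 0.0, 0.6, 1.0],
])

# Invented trait values: x could stand for activity on a kallikrein substrate,
# y for relative fibrinogen alpha-chain cleavage.
x = np.array([0.2, 0.4, 1.1, 1.3])
y = np.array([0.3, 0.5, 1.0, 1.4])

X = np.column_stack([np.ones_like(x), x])   # intercept + slope design matrix
Cinv = np.linalg.inv(C)

# GLS estimate: beta = (X' C^-1 X)^-1 X' C^-1 y
beta = np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)

# Phylogenetically corrected residual variance, standard errors and t statistics
resid = y - X @ beta
n, p = X.shape
sigma2 = (resid @ Cinv @ resid) / (n - p)
cov_beta = sigma2 * np.linalg.inv(X.T @ Cinv @ X)
t_stats = beta / np.sqrt(np.diag(cov_beta))

print("slope:", beta[1], "t:", t_stats[1])
```

Packages such as caper perform this kind of weighting internally (with optional branch-length transformations); the point of the sketch is simply that related species are not treated as independent data points.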
Cost burden and net monetary benefit loss of neonatal hypoglycaemia Background Neonatal hypoglycaemia is a common but treatable metabolic disorder that affects newborn infants and which, if not identified and treated adequately, may result in neurological sequelae that persist for the lifetime of the patient. The long-term financial and quality-of-life burden of neonatal hypoglycaemia has not been previously examined. Methods We assessed the postnatal hospital and long-term costs associated with neonatal hypoglycaemia over 80-year and 18-year time horizons, using a health-system perspective and assessing the impact on quality of life using quality-adjusted life years (QALYs). A decision analytic model was used to represent key outcomes in the presence and absence of neonatal hypoglycaemia. Results The chance of developing one of the outcomes of neonatal hypoglycaemia in our model (cerebral palsy, learning disabilities, seizures, vision disorders) was 24.03% in subjects who experienced neonatal hypoglycaemia and 3.56% in those who did not. Over an 80-year time horizon, a subject who experienced neonatal hypoglycaemia had a combined hospital and post-discharge cost of NZ$72,000 due to the outcomes modelled, which is NZ$66,000 greater than that of a subject without neonatal hypoglycaemia. The net monetary benefit lost due to neonatal hypoglycaemia, using a value per QALY of NZ$43,000, is NZ$180,000 over an 80-year time horizon. Conclusions Even under the most conservative of estimates, neonatal hypoglycaemia contributes a significant financial burden to the health system both during childhood and over a lifetime. The combination of direct costs and loss of quality of life due to neonatal hypoglycaemia means that this condition warrants further research focused on prevention and effective treatment. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-021-06098-9. Background Neonatal hypoglycaemia is a common but treatable metabolic disorder that affects newborn infants, most often in the first 24 h after birth. It is typically asymptomatic and, if not identified and treated adequately, may result in neurological sequelae that persist for the lifetime of the patient [1]. The overall incidence is estimated to be up to 15% of all infants, and 50% in infants with risk factors such as being born small, large, preterm, or to a mother with diabetes [2,3]. Although severe symptomatic neonatal hypoglycaemia has been recognised since 1937 [4], controversy and knowledge gaps in understanding this condition persist, particularly pertaining to its definition, the degree and duration of hypoglycaemia that may result in complications [3], and the risk of complications associated with asymptomatic disease [5]. There is also great variation in definitions of outcomes, tools for assessing the presence and severity of outcomes, the age at which assessments are made, and the characteristics of populations in which the outcomes are measured. Short-term costs have been described previously for infants at increased risk of neonatal hypoglycaemia [6], but there remains a paucity of high-quality prospective evidence examining the post-discharge outcomes of neonatal hypoglycaemia [5,7], and their costs.
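The "net monetary benefit loss" quoted above is, in standard health-economics usage, the difference in net monetary benefit between the two arms of the model; stated explicitly (this is our formulation of the standard definition, not an equation taken from the paper):

\[
\mathrm{NMB}_i = \lambda \cdot \mathrm{QALY}_i - \mathrm{Cost}_i,
\qquad
\Delta\mathrm{NMB} = \mathrm{NMB}_{\text{no hypoglycaemia}} - \mathrm{NMB}_{\text{hypoglycaemia}},
\]

where \(\lambda\) is the value placed on one quality-adjusted life year (NZ$43,000 in the base case) and the costs and QALYs are the discounted lifetime totals for each arm.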
We have undertaken an economic analysis to compare the costs and utilities for subjects who experienced neonatal hypoglycaemia and those who did not, with the objective of quantifying the total cost burden due to neonatal hypoglycaemia and of providing, via net monetary benefit loss estimations, an indication of the impact of longer-term outcomes that will be useful for future economic evaluations of preventative treatments. Methods We assessed the postnatal hospital and long-term costs associated with neonatal hypoglycaemia over 80-year and 18-year time horizons, assessing the impact on quality of life using quality-adjusted life years (QALYs), from the perspective of the New Zealand healthcare system, in which health and disability services, including inpatient and outpatient public hospital and primary care services, are funded or subsidised by the government [8]. A decision analytic model was used to represent key outcomes in the presence and absence of neonatal hypoglycaemia. Classification of outcomes In order to determine the outcomes of neonatal hypoglycaemia and their respective probabilities (prevalences), we searched the Medline, EMBASE, and CINAHL databases, combining: 1) the diagnosis of neonatal hypoglycaemia with 2) previously reported neurodevelopmental or neurological outcomes of neonatal hypoglycaemia or the standardised assessment tools used to identify them; or 3) Subject Heading Terms for outcome measures, quality of life measures, outcome assessments, or health status indicators (Additional file 1). Publications cited within the identified studies were also reviewed. Our initial literature search, including hand searching, yielded 2530 reports, of which 2446 were excluded on title and abstract screening, and the remaining 84 studies were used to identify outcomes related to neonatal hypoglycaemia, including candidate clinical outcomes for inclusion in our model (Table 1). Table 1 outcomes, by domain: Cognitive: cognitive dysfunction [10], impaired perceptive performance [11], cognitive delay [12]. Language: verbal skills delay [9], speech/language delay [12]. Motor: impaired coordination/motricity [11], cerebral palsy [12]. Social-emotional: hyperactivity and inattention [13]. Adaptive behaviour: impairment of adaptability and motivation [14]. Executive function: impairment of recognition memory [15], working memory deficits [16,17], impairment of explicit memory (recall) after a delay [18]. Growth: lower body weight [9], suboptimal head growth [9,12]. Visual: occipital lobe injury on MRI [19,20], blindness or impaired visual acuity [12], other specific visual impairment, including squint, visual field defect, cortical visual impairment, immature visual attention and tracking, and visuo-spatial difficulties [12]. Hearing: deafness or impaired hearing. Neurological: white matter abnormalities [12], seizures/epilepsy [12,21]. Of these, 43 studies reported the probabilities of at least one outcome, or a probability could be readily calculated from a relative risk or absolute number. Thirty-five studies were excluded: 22 because they reported populations with significant confounders or comorbidities, or small populations, or did not include sufficient information to calculate prevalence in the hypoglycaemic subgroup. A further 13 studies were excluded because of a high risk of bias.
This was independently assessed by two authors using the Joanna Briggs Institute Checklist for Prevalence Studies [22], and converted to a numeric risk-of-bias score based on the ratio of checklist responses indicating high risk of bias to those indicating low risk of bias, excluding those not applicable to the publication being considered. The remaining 8 publications with a score < 50% were considered at low risk of bias and included in our analysis (Additional file 2). Some publications contributed more than one prevalence value per outcome due to, for instance, different cohorts or different outcome subsets. If no published reports describing prevalence were found in our search, that outcome was not included in our model. This resulted in a final list of five key outcomes with prevalence data able to be included in our model: (1) cerebral palsy [1,23,24]; (2) learning disabilities (mild-moderate learning disorders, language development disorders, intellectual disability) [1,[23][24][25][26][27][28]; (3) severe learning disabilities (severe or global developmental delay) [23]; (4) epilepsy (seizures beyond those during the initial episodes of hypoglycaemia) [23,29]; and (5) vision disorders (including blindness and central processing disorders) [23]. For intellectual and/or learning disabilities, we categorised mild-to-moderate intellectual disability as an IQ of 70-85, or a description of functional level implying an IQ in that range (e.g., possibly requiring educational support during school age, but able to live independently and perform activities of daily living without ongoing support). We categorised severe intellectual disability as an IQ < 70, being described as having severe or profound learning or intellectual disabilities, or requiring full- or part-time homecare support for supervision, assistance with self-care or communication. Two studies were excluded as they report outcome prevalences at 2 years of age in cohorts that overlap with that reported by McKinlay et al. [23] at 4.5 years of age (McKinlay et al. 2015 [30], Harris et al. [31]). Data from the older age were selected in order to capture morbidities, such as some learning disabilities, which are less reliably assessed at a younger age. The weighted mean prevalence for each outcome was calculated as the sum of all qualifying cases across all included studies divided by the sum of the total population across all included studies. This varied from 2 cases within a population of 270 for vision disorders through to 7604 cases within a population of 1,421,813 for epilepsy (Table 2). The size of the combined population, and the overall number of cases, informed the parameters used to represent the beta distribution of these prevalences in our stochastic analysis. Since individuals can have more than one outcome of interest, we examined the original data from two studies that have reported the outcomes included in our model in children with increased risk for hypoglycaemia (the Children With Hypoglycaemia and Their Later Development [CHYLD] Study [23] and the Protein, Insulin, and Neonatal Outcomes [PIANO] Study [28]). In the CHYLD + PIANO cohorts, the prevalence of any multiple-issue health state (i.e., two or more concurrent morbidities) was 2.59%. Not all combinations of outcomes occurred in these cohorts.
The combinations of cerebral palsy with learning disorders (any severity), and of blindness/vision disorders with learning disorders (any severity), each occurred with a higher frequency than expected by chance (Fisher's exact test, two-sided p values of 0.001 and 0.004, respectively). Within our analysis, however, estimates of mean prevalence for different outcomes are treated as independent due to data limitations (including the low expected counts for most outcomes). Prevalences of the outcomes in the general population, independent of neonatal hypoglycaemia status, were sought using similar search strategies for outcomes (Additional file 1) and costs (Additional file 3). Large meta-analyses were selected to determine the overall prevalences of cerebral palsy [32], epilepsy [33], intellectual disability [34], and vision impairment [35].

Costs

We searched Medline, EMBASE, and CINAHL for published direct medical costs associated with the selected outcomes, regardless of aetiology. We considered studies for inclusion if they reported a standard deviation or confidence interval for costs and provided transparent estimates of the included cost components and sample size. We assumed that the costs for an outcome were independent of the aetiology of that outcome. For post-discharge costs, reports from Australian or New Zealand populations were prioritised, with other geographical populations included in the absence of Australasian data. For patients with cerebral palsy, we used United States estimates that included total health expenditure (inpatient, outpatient, and medication costs) [36]; Australian costs were used for patients with learning disabilities [37], epilepsy [38], and visual impairment [39,40] (Table 3). Annualised costs were converted from the published currencies to NZ$ and US$ using purchasing power parities (PPP) [41] and then corrected for inflation to 2018 levels (end of the second quarter) using the Personal Consumption Expenditures (PCE) health-by-function index [42], which includes out-of-pocket health expenditure and personal consumption of health services paid for on behalf of individuals by third-party payers [43] (Table 3). The costs used as input parameters, with their respective distributions and distribution parameters, are shown in Additional file 4. The costs calculated by Doran et al. [37] were all considered to relate to severe intellectual disability. Kancherla et al. [36] estimated costs separately for cerebral palsy with and without intellectual disability. However, their distinctions between levels of severity of intellectual disability and their exclusion of learning disorders mean that their definitions of mild, moderate, and severe cases are not well aligned with those in our model, and we therefore used their cost estimates for cerebral palsy without intellectual disability only. Because our definition of mild-to-moderate intellectual disability/learning disorders describes subjects who may need additional educational support but are unlikely to incur medical costs beyond those of the general population, we did not attribute any direct health-related costs to this group. The definition of vision disorder was visual acuity < 6/12 of any aetiology [39,40]. The populations considered for assessing the cost of visual disorders included all ages, including patients with age-related visual problems.
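The two-step currency and inflation adjustment described above is mechanical, and a minimal Python sketch of it is given below. The PPP factor, the index values, and the example cost are hypothetical placeholders for illustration only; they are not values taken from the cited cost studies. The NZ$-to-US$ factor of 0.6938 is the conversion used in this analysis.

```python
def to_2018_nzd(cost, year, ppp_to_nzd, pce_index):
    """Convert a published annual cost to 2018 NZ$.

    cost        -- annualised cost in the source currency for `year`
    ppp_to_nzd  -- purchasing power parity factor (NZ$ per source-currency unit)
    pce_index   -- dict mapping year -> PCE health-by-function index value
    """
    cost_nzd = cost * ppp_to_nzd                  # step 1: PPP currency conversion
    inflator = pce_index[2018] / pce_index[year]  # step 2: inflate to 2018 levels
    return cost_nzd * inflator

# Illustrative (hypothetical) inputs only
pce_index = {2014: 108.0, 2018: 115.9}
cost_2018_nzd = to_2018_nzd(cost=12_000, year=2014, ppp_to_nzd=1.45, pce_index=pce_index)
usd_per_nzd = 0.6938                              # conversion factor used in this paper
cost_2018_usd = cost_2018_nzd * usd_per_nzd
```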
The overall lifetime cost was considered to be the sum of the initial postnatal hospital costs and the cumulative annual total post-discharge healthcare expenditure for each outcome over the time horizons of the analysis, discounted at 3.5% [44,45] for costs incurred in timeframes greater than 1 year. Postnatal hospital costs were based on the lengths of stay in a general postnatal ward and a neonatal intensive care unit (NICU), and their costs, as described previously [6], and were converted and inflated to 2018 NZ$ using the methods outlined above. The average cost of a postnatal hospital stay used for an infant with neonatal hypoglycaemia was NZ$7500, and for an infant without neonatal hypoglycaemia was NZ$1100.

Utility weights

For the base analysis, we used the catalogue of Kwon et al. [46] (Table 4), and for sensitivity analyses we used the published paediatric condition utility weight catalogues of Petrou and Kupek [47] and Carroll and Downs [48] (Additional file 5). Utility weights were discounted at the same rate and in the same manner as costs. Few of the outcomes reported are mutually exclusive, and disabilities can occur together. For the modelled scenarios involving comorbid outcomes, the utility of the most severe component (i.e., the outcome with the lowest utility) was used to determine the impact on quality of life. Because the rank order of the outcome utilities differed between the utility weight sets used in our sensitivity analyses, the outcome selected to represent the utility of the most severe component of comorbid outcomes sometimes differed between the base case and the sensitivity analyses. For utility weight sets that provided more than one utility weight value for a specified outcome (e.g., with separations based on severity), a mean value was used.

Analysis

The analysis considered all 24 possible combinations of outcomes, i.e., cerebral palsy (yes/no), epilepsy (yes/no), vision disorders (yes/no), and learning disabilities (severe, mild-moderate, or none) (Table 5). The costs and utility/QALYs associated with hypoglycaemia were calculated as a weighted sum over all of these possibilities. We calculated net monetary benefits (NMB) for the subjects with and without neonatal hypoglycaemia, and the net monetary benefit loss due to hypoglycaemia as the difference between these two values, using values per quality-adjusted life year (λ) of NZ$43,000 and NZ$14,000. We conducted a stochastic analysis using 100,000 runs, drawing from the estimated distributions of the input parameters (beta distributions for prevalence and utility values, lognormal distributions for costs); a minimal sketch of this calculation is given below. Credible intervals for the cost differences and the net monetary benefit lost due to neonatal hypoglycaemia were calculated as the 2.5th and 97.5th percentiles of those parameters across the 100,000 runs of the stochastic analysis, using the PERCENTILE.EXC function of Microsoft Excel. For input parameters where the standard deviation was not reported in, or could not be calculated from, the source material, the relationship between the expected value and the standard deviation was presumed to be comparable to that of other input parameters of the same type.
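To make the structure of the stochastic analysis concrete, the Python sketch below shows, under simplified and purely illustrative assumptions, how prevalence and cost uncertainty can be propagated to a distribution of net monetary benefit loss. The single-outcome structure, the beta parameters (cases, population minus cases), the lognormal cost parameters, and the QALY loss are placeholders; the actual model uses the full set of outcomes, utility weights, and discounting described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs = 100_000
lam = 43_000                      # value per QALY (NZ$), as in the base analysis

# Illustrative single-outcome example (hypothetical numbers, not study values)
cases, population = 20, 500       # informs a beta distribution for prevalence
prev = rng.beta(cases, population - cases, n_runs)

mean_cost, sd_cost = 50_000.0, 20_000.0          # discounted lifetime cost if outcome occurs
sigma2 = np.log(1 + (sd_cost / mean_cost) ** 2)  # lognormal parameters from mean and SD
mu = np.log(mean_cost) - sigma2 / 2
cost_if_outcome = rng.lognormal(mu, np.sqrt(sigma2), n_runs)

qaly_loss_if_outcome = 2.0        # discounted QALYs lost if the outcome occurs (hypothetical)

# Expected extra cost and QALY loss per subject with neonatal hypoglycaemia, per run
extra_cost = prev * cost_if_outcome
qaly_loss = prev * qaly_loss_if_outcome
nmb_loss = extra_cost + lam * qaly_loss

# 95% credible interval from the percentiles of the runs
lo, hi = np.percentile(nmb_loss, [2.5, 97.5])
print(f"mean NMB loss: {nmb_loss.mean():,.0f}; 95% CrI: {lo:,.0f}-{hi:,.0f}")
```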
We conducted the following one-way sensitivity analyses:
- substituting the alternative catalogues of utilities for childhood diseases [47,48];
- substituting a multiplicative method to estimate the utility values of multiple health state outcomes;
- discount rates of 0 and 5%;
- calculating the costs of multiple health state outcomes as the sum of the costs of all included outcomes;
- using only the lowest published prevalence for each major outcome;
- using prevalences for vision disorder and epilepsy after neonatal hypoglycaemia that are equivalent to the respective prevalences in the population without neonatal hypoglycaemia.

We estimated the financial implications of neonatal hypoglycaemia, in terms of the healthcare cost difference and the net monetary benefit lost due to neonatal hypoglycaemia, for the New Zealand population, and extrapolated to the United States population, in their respective 2018 currencies (to convert NZ$ to US$, multiply by 0.6938). These estimates were based on an incidence of neonatal hypoglycaemia (< 2.6 mmol/L) of 15.3% (30% of all infants born at increased risk, with 51% of these experiencing neonatal hypoglycaemia [2]).

Base analysis

In our base analysis, the chance of developing one of the outcomes in our model was 24.03% in subjects who had experienced neonatal hypoglycaemia and 3.56% in those who had not (Additional file 6). Over an 80 year time horizon, a subject who had experienced neonatal hypoglycaemia had a combined discounted hospital and post-discharge cost of NZ$72,000, which is NZ$66,000 greater than that of a subject without neonatal hypoglycaemia (Table 6). However, there is significant uncertainty in this cost difference, with the 95% credible interval estimated in our stochastic analysis spanning NZ$8800-300,000 (Fig. 1). Over the first 18 years of life, the cost difference between a subject with and without neonatal hypoglycaemia is NZ$36,000 (Table 7), spanning a 95% credible interval of NZ$7600-150,000. In addition to these cost differences, neonatal hypoglycaemia also leads to health losses. If the health lost due to neonatal hypoglycaemia is valued at NZ$43,000 per QALY and added to the cost impacts, then the expected net monetary loss due to neonatal hypoglycaemia is around NZ$190,000 over an 80 year time horizon. In New Zealand, the national pharmaceutical agency (PHARMAC) does not specify a cost threshold per QALY for determining whether an intervention is cost-effective [44]. If the health lost due to neonatal hypoglycaemia is valued at the higher level of NZ$72,000 per QALY, the expected net monetary loss due to neonatal hypoglycaemia is around NZ$260,000. This represents the value of avoiding an additional case of neonatal hypoglycaemia and can guide the evaluation of treatments addressing these risks. Figure 2 illustrates the uncertainty around this figure, showing a 95% credible interval of NZ$110,000-420,000.

Sensitivity analyses

The mean net monetary benefit loss attributable to neonatal hypoglycaemia was not greatly affected by using different catalogues of utility values for childhood diseases, or by using the approach of multiplying relevant utility values (Tables 6 and 7). One-way sensitivity analyses that employed 0 and 5% discount rates for costs and utilities altered the mean net monetary benefit loss due to neonatal hypoglycaemia to NZ$510,000 and NZ$140,000, respectively; a simple illustration of how strongly the discount rate shapes long-horizon totals is sketched below.
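As a minimal illustration of why the discount rate has such a large effect over an 80 year horizon, the Python sketch below accumulates a constant annual cost under the 0%, 3.5%, and 5% rates used in the analyses. The NZ$1000 annual cost is a placeholder, not a model input, and costs beyond the first year are discounted as in the methods above.

```python
def discounted_total(annual_cost, years, rate):
    """Sum a constant annual cost over `years`, discounting costs beyond the first year."""
    return sum(annual_cost / (1 + rate) ** t for t in range(years))

for rate in (0.0, 0.035, 0.05):
    total_18 = discounted_total(1000, 18, rate)
    total_80 = discounted_total(1000, 80, rate)
    print(f"rate {rate:>5.1%}: 18-year total {total_18:,.0f}, 80-year total {total_80:,.0f}")
```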
The conservative approach of using only the lowest outcome prevalences reduced the mean loss to NZ$51,000 over an 80 year time horizon with a λ of NZ$43,000, but even over the 18 year time horizon with a λ of NZ$14,000, the net monetary loss due to neonatal hypoglycaemia persisted, at NZ$20,000.

National costs of neonatal hypoglycaemia

In New Zealand, where the study of dextrose gel prophylaxis was undertaken [49], there are approximately 58,000 live births per year [50]. This equates to an estimated 8874 cases of neonatal hypoglycaemia per year, with an associated cost of NZ$590,000,000. In a study of hypoglycaemia prevention with dextrose gel, the relative risk of hypoglycaemia was 0.79 compared with placebo [49]. Thus, a prophylactic strategy that achieved a 21% reduction in cases of neonatal hypoglycaemia would result in an 80 year cost saving of NZ$120,000,000, or a net monetary benefit saving of NZ$320,000,000 over an 80 year time horizon. In the United States there are approximately 3,855,500 live births per year [51] and an estimated 589,892 cases of neonatal hypoglycaemia per year, costing US$27,000,000,000 annually. Although we note the differences in the structure of the health systems between the two countries, a 21% reduction in cases in the United States would result in an 80 year cost saving of approximately US$5,400,000,000, or a net monetary benefit saving of US$15,000,000,000 over an 80 year time horizon. A simple worked version of these extrapolations is sketched below.

Discussion

Neonatal hypoglycaemia is a common condition that affects up to 15% of all newborns. Both the healthcare-related costs of, and the impact on quality of life due to, the long-term outcomes of neonatal hypoglycaemia accrue over the lifetime of the subject. A paucity of data pertaining to the post-discharge outcomes of neonatal hypoglycaemia [5] has meant that quantification of these burdens is difficult, and there have been calls for well-designed studies to examine the association between neonatal hypoglycaemia and long-term neurodevelopmental outcomes [5,52]. Importantly, the economic impact of the long-term outcomes of neonatal hypoglycaemia has also not previously been investigated. We have used currently available data to estimate the cost difference between subjects with and without neonatal hypoglycaemia, and the net monetary benefit lost, which includes an estimate of the impact on quality of life attributable to neonatal hypoglycaemia. We estimated that the cost difference between an infant who develops neonatal hypoglycaemia and one who does not is NZ$66,000 over an 80 year time horizon, with NZ$36,000 of this attributable to the first 18 years. The net monetary benefit lost due to neonatal hypoglycaemia, which reflects the level the healthcare system would be willing to pay to prevent cases, using a willingness-to-pay value of NZ$43,000 per quality-adjusted life year, is NZ$180,000 per patient over an 80 year time horizon, and NZ$92,000 per patient over an 18 year time horizon. The bulk of this cost is accrued after discharge from the initial postnatal hospital stay for both time horizons. In New Zealand, and by extrapolation in the United States, these figures accumulate to significant national costs and net monetary benefit losses due to neonatal hypoglycaemia over the lifetime of the patient. Prevention of this condition is difficult, but early feeding is recommended, and buccal dextrose gel prophylaxis looks promising [49].
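The national extrapolations above follow from simple arithmetic on the per-case estimates. The Python sketch below reproduces the structure of that calculation; the per-case figures fed into it are the rounded results reported above, so its outputs are approximate and will differ slightly from the rounded national totals quoted in the text.

```python
def national_burden(live_births, incidence, cost_per_case, nmb_loss_per_case, reduction):
    """Approximate national cost and NMB impact of neonatal hypoglycaemia.

    incidence          -- proportion of live births affected (e.g. 0.153)
    cost_per_case      -- discounted 80-year cost difference per case
    nmb_loss_per_case  -- net monetary benefit loss per case
    reduction          -- proportional reduction in cases from prophylaxis (e.g. 0.21)
    """
    cases = live_births * incidence
    return {
        "cases_per_year": cases,
        "annual_cohort_cost": cases * cost_per_case,
        "cost_saving": cases * reduction * cost_per_case,
        "nmb_saving": cases * reduction * nmb_loss_per_case,
    }

# New Zealand, using rounded per-case estimates from the base analysis above
nz = national_burden(live_births=58_000, incidence=0.153,
                     cost_per_case=66_000, nmb_loss_per_case=180_000, reduction=0.21)
```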
Our data suggest that a prophylactic strategy that achieved a reduction of even a modest proportion of cases would result in substantial cost savings and quality of life improvements in the population. To the best of our knowledge, this is the first economic analysis of the long-term outcomes of neonatal hypoglycaemia. Strengths of our study include the use of standard decision analysis modelling methodologies, and systematic literature reviews to determine the input parameters of our model. In particular, a systematic approach was used to select studies reporting the prevalences of outcomes of neonatal hypoglycaemia. Further, in order to reflect the uncertainty and broad distributions of input parameters in our model, we have performed our analysis using a conservative approach. The sensitivity analyses used variations in the source and methods associated with the input parameters, including very conservative analyses using minimum values for the outcome prevalences, and an upper range for discount rate of 5% per annum. In addition, we have focused only on the direct costs to the healthcare system. The inclusion of other societal costs, particularly those borne by the education system or in the form of other government-funded support, and families of affected individuals, and indirect costs, would increase the overall financial costs of this condition. Our model incorporates a number of simplifications to overcome data limitations, particularly pertaining to the prevalence of outcomes. The outcomes we incorporated into our model are limited to those for which prevalence data were available, and for which impact on quality of life can be represented by utility weights available in the selected paediatric utility weight catalogues. The prevalence values we selected for inclusion span a fairly wide distribution for each outcome, despite selection on the basis of low risk of bias. This is predominantly due to the small study populations and numbers of cases, with the exception of data pertaining to epilepsy [29]. The exclusion of outcomes such as decreased body weight, suboptimal head growth during infancy, and radiological findings such as white matter abnormalities observed by MRI scanning, will result in conservative cost and utility estimations. Ongoing long-term clinical studies investigating the relationship between the severity and frequency of neonatal hypoglycaemia and subsequent neurodevelopmental outcomes [23] will contribute to more accurate estimations of the prevalence of such complications, and data that can ultimately be incorporated into future iterations of economic analyses of neonatal hypoglycaemia. Specific challenges were encountered in estimating cost parameters for our model, including that it was necessary to combine sources across countries, despite acknowledged differences in approaches to healthcare funding and payment. Notably challenging were the limitations in estimating costs of learning disabilities in children, including carer benefits/opportunity lost, the costs borne by the education system, and the affected individual's capacity to earn income thereafter. Our model therefore excludes indirect costs, and costs outside of the healthcare system. 
In many instances, particularly for mild or moderate learning disabilities, there may be negligible or very little additional cost to the healthcare system (and for this reason, the costs for these were set to zero in the model), with the majority of financial impact coming in the form of supplementary or specialised teaching support financed by the education sector or privately by family or caregivers. The implication of this approach is that, by focusing on healthcare system costs, our model considerably underestimates the overall societal costs. Further complicating the estimation of the costs of all severities of learning disabilities, as for many other conditions of childhood, is the fact that direct costs are predominantly encountered earlier in life, although indirect costs and opportunity costs may manifest during adulthood. In our model, we have employed flat cost input parameters across the lifespan, but have presented results for both 18 year (childhood) and 80 year (lifetime) time horizons. In our base analysis, and in all sensitivity analyses that used a discount rate of 3.5%, the 80 year time horizon cost difference and the net monetary benefit loss is approximately double that of the 18 year time horizon. Thus, the application of discounting means that subjects in our model encounter more of their overall healthcare costs earlier. This also reflects the reality that a larger proportion of overall healthcare costs for childhood conditions may occur early, although we note that the extent to which later costs for pharmaceutical therapy, and ongoing outpatient follow-up and hospital treatment, span a wide range. Although we used existing catalogues of utility values for childhood conditions, it is worth noting that quality of life indices are more challenging to determine accurately in the paediatric population than in adult populations. Reasons for this include, but are not limited to, the frequent requirement to use a proxy respondent (parent or caregiver) to determine impact [53,54], rapid developmental changes affecting the relevance of health status indicators across age ranges and developmental states [54], and a lack of validated multi-attribute utility instruments for the very young (< 5 years of age) [53]. We modelled the outcomes present in comorbid states as being independent. Our calculated proportions of comorbidities approximates those of other reports of the prevalence of comorbid childhood chronic conditions, where estimates have been made that fewer than 5% of children younger than 18 years have two or more chronic conditions, and fewer than 1% have three or more chronic conditions [55]. The ratio of comorbid outcomes to single-health-state outcomes is thus relatively small, reducing the impact of uncertainties in estimation of probabilities and costs, and in the uncertainties introduced by the use of a multiplicative approach to calculating the combined prevalence. Similarly, as the number of comorbidities increases, cumulative deteriorations in health status measures will be observed. Although a number of approaches have been proposed for estimation of the utility of joint health states [56][57][58], there is no gold standard for their derivation from single health-state utilities [57]. 
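As a small illustration of the two approaches used in this analysis (the minimum-estimation base case and the multiplicative sensitivity analysis), the snippet below combines a hypothetical pair of single-condition utility weights in both ways; the values 0.7 and 0.9 are illustrative only.

```python
import math

utilities = [0.7, 0.9]                          # hypothetical single-condition utility weights

minimum_estimate = min(utilities)               # utility of the most severe component: 0.70
multiplicative_estimate = math.prod(utilities)  # product of the weights: 0.63
```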
When modelling the utility of comorbidities in our event tree, in the absence of utility data for specific combinations of chronic conditions [57], particularly those manifesting during childhood, we used the utility of the most severe component (i.e., a "minimum estimation" approach), rather than applying a multiplicative model, as the former has been demonstrated to provide a more accurate estimation [59]. We included the latter method as a sensitivity analysis in order to assess the impact of more conservative multiple health state outcome utility values, and found little impact on the overall cost differences or net monetary benefit loss due to neonatal hypoglycaemia. Learning disabilities and developmental delay, in particular, as comorbid health states, generally increase in prevalence as the number of other chronic conditions increases [55], and can be proportional in severity to the accumulated health-burden-over-time of the accompanying other chronic childhood conditions [60]. Although the utilisation of health services increases under these circumstances, these subjects are often represented within the distributions of costs, particularly when estimations have been made by analysing third-party payment systems [61]. This is in part due to the fact that, in the United States, the costs associated with intellectual disability are not necessarily coded in Medicaid claims unless this has a direct impact on the primary diagnosis [36]. Kancherla et al. [36] sought to resolve costs in a more granular manner by separating out their cost estimates of cerebral palsy with and without intellectual disability, but noted that under-diagnosis of intellectual disability may mean that children with severe intellectual disability are overrepresented in the cohorts, resulting in an overestimation of the cost of intellectual disability that co-occurs with chronic conditions such as cerebral palsy [36]. Although some of the clinical outcomes of neonatal hypoglycaemia may have an impact on lifespan, we have not explicitly modelled this. Patients with intellectual disability form the largest group of individuals with negative clinical outcomes due to neonatal hypoglycaemia within our model. No difference in mortality was observed in a large, 35-year population-based cohort study of persons with intellectual disability [62]. The impact of any premature mortality due to other neonatal hypoglycaemia-related outcomes, such as epilepsy [63], which is more likely to be evident over the 80 year time horizon than the 18 year time horizon, is mitigated by discounting, wherein long-term costs are borne early, with late costs being devalued cumulatively. We have sought to mitigate these limitations and challenges by incorporating the wide distributions of the cost, prevalence, and utility input parameters into stochastic versions of our model, and by undertaking sensitivity analyses that were intentionally conservative. Conclusions The long-term financial and quality-of-life burden of neonatal hypoglycaemia has not been previously examined. We have analysed the impact of the long-term outcomes of neonatal hypoglycaemia using a decision analytic model. Even under the most conservative of conditions, our estimation of the cost of neonatal hypoglycaemia both over childhood and over a lifetime shows that neonatal hypoglycaemia contributes a significant financial burden to the health system. 
The combination of direct costs and loss of quality of life due to neonatal hypoglycaemia means that this condition warrants further research focused on prevention and effective treatment.
The Epoch of Reionization Window: I. Mathematical Formalism

The 21 cm line provides a powerful probe of astrophysics and cosmology at high redshifts, but unlocking the potential of this probe requires the robust mitigation of foreground contaminants that are typically several orders of magnitude brighter than the cosmological signal. Recent simulations and observations have shown that the smooth spectral structure of foregrounds combines with instrument chromaticity to contaminate a "wedge"-shaped region in cylindrical Fourier space. While previous efforts have explored the suppression of foregrounds within this wedge, as well as the avoidance of this highly contaminated region, all such efforts have neglected a rigorous examination of the error statistics associated with the wedge. Using a quadratic estimator formalism applied to the interferometric measurement equation, we provide a framework for such a rigorous analysis (incorporating a fully covariant treatment of errors). Additionally, we find that there are strong error correlations at high spatial wavenumbers that have so far been neglected in sensitivity derivations. These error correlations substantially degrade the sensitivity of arrays relying on contributions from long baselines, compared to what one would estimate assuming uncorrelated errors.

I. INTRODUCTION

Modern cosmological observations have produced exquisite constraints on both the initial and final conditions of structure formation in our Universe. Initial conditions have now been probed to high significance with a large number of cosmic microwave background experiments [1,2], while at low redshifts, a combination of galaxy surveys and traditional astronomical measurements provides the final conditions [3]. Still missing from these direct observations, however, are the intermediate epochs that bridge the gap between early and late times. For example, despite tremendous recent progress in high-redshift galaxy observations, details regarding the formation of the first luminous objects and their effects on the intergalactic medium (IGM) during the Epoch of Reionization (EoR) remain uncertain. In the next few years, direct observations of the EoR will be made possible by measurements of the redshifted 21 cm hyperfine transition of neutral hydrogen (see, e.g., Refs. [4-7] for reviews). At the relevant redshifts, the intensity field of the 21 cm brightness temperature depends on a rich variety of different astrophysical effects, such as fluctuations in the ionization and spin states of the IGM, as well as cosmological quantities such as the underlying dark matter density field and peculiar velocity gradients. A map of the 21 cm intensity field at redshifts z ∼ 6 and above would therefore be a rich probe of EoR physics, including the nature of the first luminous sources (such as their typical mass and luminosity scales), their ionizing and heating efficiency, and feedback processes on the IGM, among other effects. Such a mapping can be accomplished in three dimensions, since the spectral nature of a 21 cm measurement provides redshift (and therefore line-of-sight distance) information, while the angular directions are mapped using traditional imaging. Because of this, the 21 cm line allows access to a large fraction of our Universe's comoving volume, potentially allowing futuristic measurements to move beyond astrophysics and into the measurement of fundamental cosmological parameters [8-10].
There are currently a number of experiments aimed at mapping the fluctuations of the cosmological 21 cm signal, including the Giant Metrewave Radio Telescope Epoch of Reionization experiment (GMRT-EoR [11]), the Low Frequency Array (LOFAR [12]), the Murchison Widefield Array (MWA [13]), and the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER [14]). These interferometer arrays have yet to make a positive detection of the cosmological signal, with the primary challenges being foreground contamination and the high sensitivity requirements. To increase sensitivity, these experiments are primarily targeting binned, statistical measures of the brightness temperature field such as the power spectrum. Recent progress has resulted in a number of increasingly stringent upper limits [11,15,16], and proposed next-generation instruments such as the Hydrogen Epoch of Reionization Array (HERA [17]) and the Square Kilometer Array (SKA [18]) promise to yield extremely high significance measurements.

In addition to achieving the required sensitivity, observations targeting the redshifted 21 cm line must also contend with foreground contaminants. In the relevant frequency ranges (roughly ∼ 100 to 200 MHz, corresponding to z ∼ 13 to 6), there exist a large number of non-cosmological sources of radio emission that contaminate measurements. These include sources such as the diffuse synchrotron radiation from our own Galaxy, as well as extragalactic point sources, whether they are bright and resolved or part of a dim and unresolved background. The brightness temperature of the foregrounds is expected to be ∼ 10^5 times greater than theoretical expectations for the amplitude of the cosmological signal. A detection of the reionization power spectrum will therefore be challenging without a robust foreground mitigation strategy.

Historically, cosmic microwave background (CMB) experiments have had to deal with similar problems of foreground contamination. However, strategies for foreground cleaning that have been developed for the CMB cannot be directly applied to 21 cm cosmology for two reasons. First, CMB experiments typically operate at higher frequencies, where foregrounds are not as bright. In fact, microwave-frequency foregrounds are subdominant to the CMB away from the Galactic plane. In addition, CMB experiments measure anisotropies over a two-dimensional surface, with different observation frequencies providing consistency checks and a set of redundant measurements that can be used for foreground isolation. The three-dimensional mapping of the 21 cm line, on the other hand, contains unique cosmological information at every frequency, which makes it more difficult to remove foregrounds in a way that does not result in the loss of cosmological signal [19].

Recently, however, a complication to this simple picture was realized, in what has been colloquially termed the "foreground wedge". Consider a cylindrically binned power spectrum measurement, i.e., one where Fourier amplitudes are squared and binned in annuli specified by wavenumbers perpendicular to the line of sight, k⊥, and wavenumbers parallel to the line of sight, k∥. Because the line-of-sight direction is equivalent to the spectral axis of an interferometer, one might have naively expected smooth-spectrum foregrounds to be sequestered to only the lowest k∥. However, this neglects the fact that interferometers are inherently chromatic instruments, with a given baseline probing finer spatial scales (higher k⊥) at higher frequencies. This coupling of spectral and spatial information is sometimes coined mode-mixing, and results in the leakage of information from low to high k∥. This effect is particularly pronounced at high k⊥, where the modes are typically probed by longer baselines, which are more chromatic (this chromatic scaling is illustrated numerically in the short sketch below). Putting everything together, theoretical studies and simulations [32,39-44] have shown that foregrounds are expected to leak out of the lowest k∥ into a characteristic "wedge" that is shown schematically in Figure 1. Observations with PAPER and MWA have confirmed this basic picture [45], including its evolution with frequency [15].

[FIG. 1. A schematic of the EoR window in the cylindrical k⊥-k∥ Fourier plane. At the lowest k⊥, errors increase because of limits on an instrument's field of view. High k⊥ modes are probed by the longest baselines of an interferometer array, and the sensitivity drops to zero beyond the k⊥ scales corresponding to these baselines. Spectral resolution limits the sensitivity at large k∥. The lowest k∥ are in principle limited by cosmic variance, but in practice the larger concern is limited bandwidth and the foreground contamination, which intrinsically resides at low k∥. As one moves towards higher k⊥, however, the foregrounds leak out to higher k∥ in a characteristic shape known as the "foreground wedge". The remaining parts of the Fourier plane are thermal-noise dominated, allowing (with a large collecting area or a long integration time) a clean measurement of the power spectrum in this "EoR window".]

The foreground wedge is both a blessing and a curse. At the sensitivity levels that have been achieved by current experiments, observations have seen a sharp drop-off in foregrounds beyond the wedge [45]. Theoretical calculations and simulations have shown that such a drop-off is the natural consequence of geometric limitations [32], provided the foregrounds are spectrally smooth. If further integration reveals low-level foregrounds that are spectrally unsmooth, their influence will leak beyond what is typically labeled as the edge of the wedge. However, if foregrounds continue to be reasonably smooth, the fact that physical considerations limit the extent of the wedge implies that there must exist an "EoR window": a region in Fourier space that is a priori expected to be foreground-free. The existence of the EoR window thus enables a relatively robust foreground avoidance strategy, where a detection of the power spectrum can be made simply by avoiding measurements within the wedge. On the other hand, such a conservative approach forces one to work at higher k∥ than if the chromatic effects had not caused the wedge in the first place. This is unfortunate because the ratio of the cosmological signal to instrumental noise typically peaks at low k, which means that if it were possible to work within the wedge (or to at least push back its boundaries a little), one would be able to make higher significance detections of the power spectrum. Indeed, in Ref. [17] it was suggested that working within the wedge can increase the detection significance anywhere from a factor of two to six (depending on the interferometer's configuration), with corresponding decreases in the error bars on astrophysical parameters. Given the potentially high payoff associated with pushing back the influence of the wedge (or equivalently, enlarging the EoR window), it is important to have a statistically rigorous framework for describing the wedge.
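To make the chromaticity argument concrete, the short Python sketch below evaluates the spatial Fourier scale u ≈ ν|b|/c probed by a single fixed baseline at several frequencies across a typical EoR band. The 100 m baseline length and the frequency grid are illustrative choices, not parameters of any particular instrument discussed here.

```python
import numpy as np

c = 299_792_458.0                      # speed of light [m/s]
baseline_length = 100.0                # fixed baseline length [m], illustrative
frequencies = np.array([100e6, 150e6, 200e6])   # observing frequencies [Hz]

# A single baseline probes a different angular Fourier scale at each frequency:
u = frequencies * baseline_length / c  # u = |b| * nu / c = |b| / lambda
for nu, u_val in zip(frequencies, u):
    print(f"{nu / 1e6:5.0f} MHz -> u = |b|/lambda = {u_val:6.1f}")
```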
In this paper, we provide just such a mathematical framework. Since there already exists an extensive literature on the foreground wedge and the EoR window, it is worth summarizing the ways in which this paper builds upon and extends previous results. Works such as Refs. [39][40][41] describe instrumental simulations that take one particular realization of foregrounds and propagate them through a power spectrum estimation pipeline. They therefore only probe the mean power spectrum, and not the scatter (i.e. the errors) about this mean. In Ref. [43], a statistical treatment of point source populations was considered. While error bars were computed, offdiagonal error correlations (i.e. covariances) in the final measurements were neglected. Potential error correlations are important particularly because Ref. [43] considered the application of various tapering functions to their Fourier transforms, and certain choices can result in significant correlations between Fourier bins. We consider full foreground covariances, a full treatment of instrumental effects (such as having a non-tophat beam), a full treatment of data analysis choices such as tapering functions, and a fully covariant propagation of errors. We build upon Ref. [42], which made use of Monte Carlo methods to propagate errors. Our treatment is more analytic, allowing us to capture the large dynamic range needed to accurately compute the error statistics in a measurement where the foregrounds are many orders of magnitude brighter than the signal. This is made computationally feasible by our use of the delay spectrum approach introduced in Ref. [32], where input frequency spectra are sorted into a set of time-delay τ modes via a per-baseline Fourier transform. However, unlike in some works where the delays are used as an approximation for line-of-sight Fourier modes (an assumption that is only valid for short baselines, as we will discuss in Section II), we use delays strictly as a convenient choice of basis. This basis makes it computationally possible for us to deal directly with visibilities in our formalism (bypassing any mapmaking steps), which avoids gridding artifacts in our numerical results. We also take into full account the correlations between partially overlapping baselines, and therefore rigorously treat the possible complications that were highlighted in Ref. [44]. A related treatment pertaining to lower-redshift 21 cm intensity mapping experiments (though focusing less on the details of the wedge) can be found in Ref. [35]. While our fiducial calculations are centered around instruments targeting the EoR, the techniques developed in this paper are equally applicable to cosmological 21 cm at lower (or higher) redshifts. We accomplish our goals by making use of the quadratic estimator formalism, which was adapted for 21 cm power spectrum measurements in Refs. [36,37], and applied to real data in Ref. [15]. However, appropriate "wedge effects" were not incorporated into the formalism, an omission that we rectify in this paper. Placing everything in the quadratic estimator formalism enables a systematic computation of the aforementioned error statistics, as well as a systematic study of the optimality (or lack thereof) of various power spectrum estimators. In a sequel paper (Ref. [46], henceforth "Paper II"), we will take advantage of this to examine the extent to which statistical methods can enlarge the EoR window. 
With our fully covariant treatment, we find that the wedge is not simply a region of large foreground errors and biases, but also a marker for error correlations: at k⊥ values where the wedge is a dominant effect, the errors tend to be strongly correlated. With strongly correlated errors, the number of independently measurable Fourier modes is reduced, suggesting that previous sensitivity estimates (such as those in Refs. [17,47,48]) may be overly optimistic, particularly for arrays that make use of long baselines (such as LOFAR or GMRT). In fact, the rewards for working within the wedge may be overrated as a result of this, but of course this cannot be quantified without a rigorous way to compute the error statistics of the wedge; hence the present paper.

The rest of this work is organized as follows. In Section II we examine the measurement equation of an interferometer in detail, paying special attention to chromatic effects. This provides a first non-covariant preview of the foreground wedge, which we generalize to an approximate, but fully covariant, description in Section IV, following a review of the quadratic estimator formalism in Section III. In Section V we discard the approximations made in Section IV in a full numerical implementation of our formalism. We summarize our conclusions in Section VII. Because a large number of mathematical quantities are defined in this paper, we provide dictionaries in Tables I and II for the reader's convenience.

II. A NON-COVARIANT PREVIEW OF THE FOREGROUND WEDGE

In this section, we will derive the Fourier-space foreground wedge from first principles. We begin with the visibility V measured by a baseline b at frequency ν,

V(b, ν) = ∫ d²θ I(θ, ν) A(θ/θ₀) e^(−2πi ν b·θ/c),   (1)

where θ is the angular sky position, I(θ, ν) is the sky temperature, and A(θ/θ₀) is the primary beam (see Footnote 1 below), with θ₀ denoting its characteristic width. For notational simplicity in this section, we will omit the instrumental noise contribution to the visibility, but of course it is always implicitly present. Our (arbitrary) convention for A is that it is dimensionless and is normalized so that A(0) = 1. In what follows, we will see that the foreground wedge arises from the fact that the product of ν and θ appears in the complex exponential. Fourier transforms in θ are therefore coupled to ν and vice versa, leading to the "mode-mixing" phenomena coined in Ref. [41] and ultimately the wedge.

[Footnote 1: In general, the primary beam will depend on frequency, although for some instruments (such as PAPER) the antennas are intentionally designed to minimize the frequency dependence of the beam [14]. In this paper, we will neglect the frequency dependence, because our goal is not to provide results pertaining to a particular instrument, but instead to provide a rigorous understanding of how foregrounds enter an interferometric power spectrum measurement. Including a frequency-dependent beam makes many of our analytic manipulations more difficult, which obscures the key physical effects that give rise to the foreground wedge. We note, however, that our general strategy of incorporating an interferometer's measurement equation into the quadratic estimator formalism is one that is capable of including frequency-dependent beams, albeit at a slightly greater computational cost.]

To mimic the discreteness of frequency channels in a real instrument, we introduce a function γ that describes the response of a single frequency channel. Our measurement equation therefore becomes Eq. (2), with B_chan denoting the width of a frequency channel, and γ normalized such that ∫_{−∞}^{∞} γ(x) dx = 1. So far, we have no choice in the matter, in that the results of Eq. (2) are handed to us by the instrument. Moving on to data analysis, however, there is considerable freedom as to how one proceeds. For example, we may choose to move into delay space, which is accomplished by taking the Fourier transform in frequency of the spectrum measured by a single baseline (a "delay transform"), giving Eq. (3), where τ is the delay (with units of time), B_band is the bandwidth over which we wish to compute a power spectrum, and ν₀ is the central frequency of our band. The function φ is normalized so that φ(0) = 1, and captures both the bandpass of our instrument and any tapering that one may wish to impose near the band edges. (In what follows, we will therefore use the terms "tapering function" and "bandpass" interchangeably to describe φ.) Precisely what form the edge tapering takes is a data analysis choice, and as shown in Ref. [43], choosing a good tapering function minimizes leakage of foregrounds in the Fourier plane. Later on, we will extend the work of Ref. [43] to self-consistently incorporate the effect that a tapering function has on error covariances. Aside from the peculiarities of certain tapering functions, working in delay space is simply a change of basis, and as emphasized in Ref. [32], represents no loss of generality. In later sections, we will find that delay space is a particularly efficient basis to work in, one that makes many of the large matrices required for power spectrum estimation sparse (see Appendix C for details).

[TABLE I (fragment): I(θ, ν): sky brightness temperature at angle θ and frequency ν; h(u, η; b, τ): visibility response at delay τ of baseline b to a sky mode on spatial scale u and spectral scale η, Eq. (17); g(u, η; b, τ): same as h(u, η; b, τ) but integrated over the direction on the uv plane perpendicular to the baseline vector, Eq. (30).]

At this point our data are in a basis specified by baseline vector b and delay mode τ. This basis closely approximates the Fourier basis that the power spectrum inhabits, but the correspondence is not exact and in general must be accounted for. Concretely, suppose we let u be the Fourier dual of θ and η be the Fourier dual of ν. Since the transverse comoving distance can be used to convert angles on the sky to transverse comoving distances, we have u ∝ k⊥ (the exact conversion is given in Appendix A, where we define our Fourier conventions). Similarly, the observed frequency of a spectral line can be mapped to a comoving line-of-sight distance, so η ∝ k∥. At a particular frequency, a single baseline b of an interferometer roughly measures the spatial Fourier mode specified by wavenumber u = b/λ [this can be seen by applying Parseval's theorem to Eq. (1) and assuming that the primary beam is wide]. Across different frequencies, however, we see that a single baseline probes different spatial Fourier modes. A Fourier transform of a single baseline's spectrum [Eq. (3)] is thus a Fourier transform along one of the solid lines in Figure 2, where we show the chromaticity of various baselines.

[FIG. 2 (caption fragment): Taking a Fourier transform of the frequency spectrum of data from a single baseline essentially amounts to taking a Fourier transform along a solid line in this figure. This is a good approximation to taking a Fourier transform along the "true" frequency axis for short baselines.]

As pointed out in Ref. [32], with short baselines it is an excellent approximation to say that η ∼ τ; a minimal numerical sketch of the per-baseline delay transform is given below.
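As an illustration of the per-baseline delay transform just described, the Python sketch below Fourier transforms a single baseline's frequency spectrum after applying a tapering function. The Blackman taper, the 100 kHz channel width, and the flat-spectrum toy visibility are assumptions made for this example only; they are not choices prescribed by the formalism above.

```python
import numpy as np

# Toy frequency axis: 100 kHz channels across an 8 MHz band centred at 150 MHz
n_chan = 80
freqs = 150e6 + (np.arange(n_chan) - n_chan / 2) * 100e3   # [Hz]

# Toy visibility spectrum for one baseline (flat-spectrum source plus weak noise)
rng = np.random.default_rng(1)
vis = np.ones(n_chan) + 0.01 * (rng.standard_normal(n_chan)
                                + 1j * rng.standard_normal(n_chan))

# Tapering function phi applied across the band (a Blackman window as an example choice)
taper = np.blackman(n_chan)

# Per-baseline delay transform: FFT along the frequency axis of the tapered spectrum
delay_spectrum = np.fft.fftshift(np.fft.fft(vis * taper))
delays = np.fft.fftshift(np.fft.fftfreq(n_chan, d=100e3))   # delay tau [s]
```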
In this paper, we do not make this "delay approximation", as we consider baselines of all lengths.

[TABLE II. Dictionary of vectors and matrices. The quantities shown here are grouped into three categories: those that exist in the vector space of the visibility measurements (indexed, for example, by baseline and delay), those that exist in the vector space of bandpowers (indexed by Fourier wavenumbers), and those that bridge the two spaces. In the column giving the length/dimensions, N_bl denotes the number of baselines, N_ν the number of frequency channels, and N_bands the number of bins in Fourier space (i.e., the number of bandpowers).]

To properly account for the mapping from the "true" Fourier coordinates (u, η) to our visibilities expressed in the (b, τ) basis, we can express the sky temperature I(θ, ν) in terms of its "true" Fourier transform, I(u, η), via Eq. (4). Inserting Eqs. (2) and (4) into Eq. (3) gives Eq. (5). The integral over ν can be simplified because A is a much broader function (with a characteristic width of c/(bθ₀)) than γ (which has a characteristic width of B_chan). For a typical EoR experiment, one might have ν₀ = 150 MHz and B_chan = 50 kHz. Even with a rather conservative θ₀ = 1 radian, A is wider than γ for any baseline shorter than ∼ 3000λ. With compact arrays giving the highest sensitivity [17,47], very few interferometers that are optimized for an EoR measurement have any relevant sensitivity on baselines this long. As a result, A can be factored out of the integral and evaluated at the central frequency ν of the channel, leaving a Fourier transform over the remaining frequency variable. Defining γ̃ to be the Fourier transform of γ, this leaves Eq. (6).

[Footnote 2: Note that the distinction between τ modes and η modes implies that our tapering functions differ slightly from those examined in Ref. [43]. In this paper, the tapering functions are applied to the per-baseline Fourier transform, rather than the Fourier transform along the "true" frequency axis of Figure 2.]

With our approximations, then, the effect of having frequency channels with non-zero width is to envelope our response in η: limited spectral resolution makes the array less sensitive to high η modes (i.e., rapidly fluctuating spectral modes). To further simplify our expression, it is useful to orient our u ≡ (u, v) axes so that the u axis is in the same direction as the baseline vector b. If we further assume that the antenna's footprint on the uv plane is separable, as in Eq. (7), then our expression becomes Eq. (8). To proceed beyond this point requires specific forms for A_b and γ. However, it is instructive to examine various limits. For an instrument with short baselines and/or narrow fields of view satisfying bθ₀ ≪ c/B_band, A_b∥ is a slowly varying function that can be factored out of the integral, yielding Eq. (9), where we used Eq. (7) to recombine A_b∥ and A_b⊥. This gives the "usual" description of an interferometric measurement: each baseline samples a portion of the sky in uvη space, defined by the antenna's footprint on the uv plane and the Fourier transform of the bandpass shape in η, enveloped by the Fourier transform of a frequency channel's profile. On the other hand, for an instrument with long baselines and/or wide fields of view satisfying bθ₀ ≫ c/B_band, it is φ that is broad compared to A_b∥, and Eq. (8) is well-approximated by Eq. (10). We see that in this limit, the bandpass shape φ and the primary beam A_b swap roles: the bandpass acts as the convolution kernel on the uv plane, while the primary beam acts as the convolution kernel in the η direction. When bθ₀ ∼ c/B_band, the full expression given by Eq.
(8) interpolates between the two extremes given by Eqs. (9) and (10); in general, the bandpass shape and the primary beam both play a part in determining the uv plane and ηdirection convolution kernels. This point was emphasized in Ref. [32]. Returning to the long baseline limit of Eq. (10), we can see our first glimpse of the foreground wedge. Suppose one were dealing with flat-spectrum foregrounds, where I(u, η) = I 0 (u)δ(η), so that there is no signal beyond η = 0. With such a sky, the measurement becomes pro- . Suppose we now make use of the delay approximation [32], where the quantity | V (b, τ )| 2 can be treated as an estimate of the power spectrum P (u, η) at u ≈ ν 0 b/c and η ≈ τ . The result is Thus, even flat spectrum sources (which would naively only have power at η = 0) gives a non-zero measured power spectrum at higher η. If the primary beam is zero beyond some argument value (which is always true in some sense, since A is identically zero below the horizon), then the power extends only to a finite region on the uηplane. For example, if we define θ 0 to be the angle at which A drops to zero (we are free to do so, since θ 0 is simply a characteristic scale that we have so far kept general as simply "some characteristic scale"), Eq. (11) predicts that the line should be a sharp boundary between zero and non-zero power. Switching from angular/spectral Fourier coordinates u and η to comoving spatial wavenumbers k ⊥ and k using the relations in Appendix A gives where H 0 is the Hubble parameter, Ω m is the normalized matter density, and Ω Λ is the normalized dark energy density. This is precisely the "usual" formula given for the edge of the foreground wedge (e.g., in Refs. [15,32,41,42]). As a function of η (or k ), the wedge has a profile given by the square of the primary beam profile, scaled by u in such a way that power is found at higher η for higher u. In presenting a preview of the wedge, we have made a number of assumptions that will be relaxed in the following sections. For example, we will no longer approximate τ modes as η modes. Nor will we assume that each baseline cleanly samples just a single value of u. In fact, our use of the delay approximation here was somewhat inappropriate-while one may always take a delay transform, we have seen (e.g., from Figure 2) that the delay approximation (i.e., assuming τ ∼ η) ought to work well only if baselines are short. By using Eq. (10) instead of Eq. (9), however, we are expressly working in the longbaseline limit. It is thus important to emphasize that the preview shown here is included only to build intuition, and a much more rigorous treatment will be presented in later sections. There, we will also generalize to a sky with an arbitrary power spectrum. While there will be minor alterations to details of the foreground wedge, the basic picture will remain intact. In the strictest sense, the derivation that we have just presented is nothing new. We have simply re-derived a number of results (e.g. the existence of the wedge) that are already known in the literature. However, our rederivations have been part of an analytic formalism, confirming a number of numerical results while bringing together many known (but previously separate) features in a unified framework. In the next section, we will extend this framework to include a fully covariant description of the wedge, including correlated errors in such a way that extends the quadratic power spectrum estimation techniques of Ref. [36] to include wedge physics. 
Setting up such a framework allows one to systematically examine the statistical properties of various power spectrum estimators in light of the wedge, a study that we perform in Paper II. III. QUADRATIC ESTIMATOR FORMALISM Until now, our focus has been on the measurement of a visibility, which is linear in temperature. The power spectrum, however, is a quantity that depends quadratically on temperature. In this section we very briefly review the mathematical machinery that makes possible the fully covariant description of the EoR window we will present in Sections IV and V. The basis of our discussion will be the quadratic estimator formalism, which has a long history in the CMB and galaxy survey literature (e.g., Refs. [49][50][51]), and was explicitly adapted for 21 cm cosmology in Refs. [36,37]. The central quantity that encodes our instrument (and therefore the statistical properties of our estimators) is the data covariance matrix C. To form the covariance matrix, imagine that our input data (organized by baseline and delay mode) is serialized into a data vector x, i.e., The covariance matrix is then given by C ≡ xx † . Importantly, we will keep track of all off-diagonal elements in the matrix, so that all correlations between different baselines and different delay (or spectral) modes are taken into account. Knowledge of these correlations will allow us to formulate both a covariant description of the foreground wedge and the tools to fight its contaminating influence. Although we omitted noise contributions in the previous section for notational cleanliness, they of course contribute to the variance captured by C. Assuming that instrumental noise is uncorrelated with sky signals, the noise appears as an additive term to the sky covariance, so that we can write C as where S is the sky signal portion of the covariance. Inserting Eq. (6) into our general definition of the data covariance, we obtain where we have defined and have made use of the definition of the power spectrum P (u, η): As we have written it, our power spectrum is expressed in terms of u and η, instead of the more common combination of k ⊥ and k . Doing so minimizes the number of cosmological quantities in our expressions, since u and η are the more "natural" quantities from the perspective of the instrument. This represents no loss of generality, and indeed, in Section V we will express our results in terms of k ⊥ and k . To form a practical estimator for the power spectrum, it is necessary to discretize. Assuming that the power spectrum possesses cylindrical symmetry 4 (so that the power depends only on the magnitudes of u and η), one can imagine binning the uvη space into a series of annuli, each specified by a radius |u| = √ u 2 + v 2 and "vertical distance" |η| away from the η = 0 plane. Each annulus can then be represented as a small cell on a |u|η plane (which we will henceforth call the uη plane to conform to convention). 5 As long as these cells are made sufficiently small, the power spectrum can be approximated by a constant bandpower p α in each cell, with α indexing a serialized list of locations on the uη plane. With this approximation, our covariance can be written compactly as where C ,α ≡ ∂C/∂p α is the response of the covariance to αth bandpower, and is given by (20) with V α denoting the annular volume that is binned into the αth uη cell. 
[Footnote 4: The true power spectrum of the cosmological signal is of course spherically symmetric (thanks to statistical isotropy), and thus can be further binned. However, systematics such as foregrounds tend to be cylindrically symmetric, since the instrument probes the line-of-sight direction differently than it does the angular directions [52]. Another effect to consider (though it is beyond the scope of this paper) is that of redshift-space distortions, which will break spherical symmetry. The cylindrical power spectrum is therefore a useful diagnostic quantity to compute prior to the formation of a spherical power spectrum. Note that in general, we need not make any assumptions about symmetry in our estimation formalism. If desired, our formalism can be used to estimate P(u, η) without even the cylindrical binning step. The presence of symmetry, while useful for increasing signal-to-noise, is much less important than other assumptions such as the smoothness of foregrounds. (Although see Section VI for a discussion of strategies for dealing with unsmooth foregrounds.)]

[Footnote 5: We note an unfortunate bit of notation: u is used both to denote the first coordinate on the uv plane and the magnitude of the u vector. Unfortunately, both usages are standard.]

If desired, the sky contribution to the covariance can be further divided into separate contributions from foregrounds and the cosmological signal, as in Eq. (21), where C_fg is the foreground covariance and p^sig_α is the αth bandpower of the cosmological signal only. Eq. (21) is more general than Eq. (19), because the latter implicitly assumes that C_fg is given by Σ_α p^fg_α C_,α, where the p^fg_α are a set of foreground bandpowers. This assumption holds true only when the foregrounds are describable as a power spectrum.

In the quadratic estimator formalism, the bandpowers are extracted by forming weighted pairwise combinations of the data vector x. In particular, one can form a quadratic estimator p̂_α of the true bandpower p_α by computing

p̂_α = x† E^α x,   (22)

where E^α is an estimator matrix of weights to be used for weighting pairwise products of the data when estimating the αth bandpower. To see how this works, consider (as a toy example) a noiseless cosmological survey with uncorrelated real-space measurements in a three-dimensional volume. Further suppose that one's goal is to measure the unbinned, three-dimensional power spectrum P(k), i.e., p_α = P(k_α). If x is expressed in a real-space basis (so that it is simply a serialized list of real-space voxel intensities), a sensible choice for the estimator matrix would be E^α_ij ∝ e^(−i k_α·(r_i − r_j)), where r_i and r_j are the position vectors of the ith and jth voxels, respectively. We therefore see that in this example, the role of E^α is to take a Fourier transform of the data. If one desires estimates of a binned power spectrum (for example, one where statistical isotropy allows the binning of power over shells of constant |k|), the relevant E^α for different k_α are simply averaged together in each bin. Now suppose instead that our data are expressed in a Fourier basis, so that each element of the data vector x represents the Fourier amplitude at some location in k space. The estimator matrix is then even simpler, and in fact becomes diagonal, with E^α_ij = δ_αi δ_ij. In our current application, the data are organized by baseline b and delay τ. As discussed above, b and τ closely approximate the Fourier wavenumbers u and η in some regimes, but the correspondence is not perfect.
For our application we would therefore expect E α to be diagonal-dominant, but not be perfectly diagonal. (For an explicit form, see Sections IV and V). In general, correlated errors and other instrumental effects (such as the ones that we seek to model in this paper) make the estimator matrices more complicated than they were in our pedagogical examples. They will typically involve the C ,α matrices, since those provide the link between the "input" space that the data covariance inhabits, and the "output" space of bandpowers. The detailed form of the family of E α matrices is a choice made by the data analyst, and different choices yield estimators with different statistical properties. One such property is the error covariance Σ αβ ≡ p α p β − p α p β of our estimated bandpowers, which is given by 7 a result that can be derived by direct substitution of Eq. (22) into the definition of the error covariance. The error bars on our bandpower estimates are given by ∆p α ≡ (Σ αα ) 1 2 , but it is important to note that the error covariance contains much more information than just the error bars: off-diagonal elements of the covariance encode correlations between different uη cells of the cylindrical power spectra. With a fully covariant formulation of the errors, it is possible to over-resolve in uη space, evading the commonly-made assumption that uη cells are independent (i.e., have a diagonal covariance matrix) so long as they are more than 1/B band apart in the η direction and separated by more than the width of A in the u direction. While this assumption is prevalent in the 21 cm cosmology literature for reasons of computational simplicity, we will see that it is one that should be avoided. More precisely, we will see in Section V that the errors on the uη plane are not independent at high k ⊥ (including the foreground wedge region), necessitating a full accounting of the entire error covariance matrix, and not just its diagonal elements. In addition to the error covariance, the quadratic estimator formalism also allows the computation of biases and window functions. Taking the expectation value of Eq. (22), recalling that C ≡ xx † , and inserting Eq. (21) gives where we have defined the window function matrix 7 While we write all vector and matrix quantities in boldface, it is important to note that there are two different vector spaces at work here. The error covariance Σ and the window function matrix W defined later inhabit the "output" vector space indexed by locations on the uη plane, unlike matrices such as N and C, which inhabit the "input" vector space indexed by baseline and delay. Hybrid quantities include C,α and E α , which can either be thought of as a family of matrices in the input vector space, or as rank-3 tensors in a combined space. and the contamination bias From Eq. (24), we see that each row of the window function matrix gives a window function that specifies the linear combination of the true bandpowers that each estimate of a bandpower represents. Typically, the E α matrix is normalized such that each row of W sums to unity, allowing the linear combinations to be interpreted as weighted averages. 8 The contamination bias represents an additive bias to the estimated power spectrum that arises due to residual noise and foregrounds in the data. In practice, cross-correlation techniques-such as forming crosspower spectra between odd and even time samples of data, as was done in Ref. 
[15], or between different subsets of redundant baselines in an array, as was done in Ref. [16]-allow the noise bias to be eliminated without any explicit bias subtraction. The bias that one needs contend with is therefore solely comprised of the foreground bias: If a perfect foreground model is available, this expected level of this bias can be computed and subtracted from the power spectrum estimate. However, because a detailed knowledge of the low-frequency sky is as-yet unavailable, this subtraction step is often omitted to avoid over-subtractions that destroy cosmological information. Instead, one simply hopes that the bias is small in regions of the uη plane where one wishes to make a power spectrum measurement. In the following sections, the error covariance Σ, the window function matrix W, and the bias b α are the quantities that will provide us with a detailed, covariant picture of the EoR window and the foreground wedge. The bias essentially captures the power spectrum contribution from noise and foreground contaminants, and corresponds to the foreground wedge signatures seen in various simulations in the literature. The window functions provide an alternate view of the wedge: the wedge can be thought of as a leakage of power from low to high η (or equivalently, k ) modes that becomes increasingly pronounced as u (or k ⊥ ) increases. For a wedge to exist, window functions for bandpowers centered at high η and high u must have tails that extend to low η, where foregrounds live. Finally, the error covariance provides an estimate of the error bars throughout the Fourier plane (including the wedge region), as well as a quantification of how the chromatic response of an interferometer can cause error correlations between otherwise uncorrelated uη cells. IV. A COVARIANT DESCRIPTION OF THE FOREGROUND WEDGE FOR A BASIC ESTIMATOR In this section, we use the quadratic estimator formalism to derive the foreground wedge and the EoR window for a "basic" estimator of the power spectrum. We will make a number of approximations for the sake of analytical tractability, leaving an exact numerical treatment to Section V. The goal here is to formalize the discussion from Section II to obtain a fully covariant, analytic description of power spectrum properties at high u and low η, where the foreground wedge resides. This will provide a basic picture of the foreground challenges that we face, setting the stage for Paper II, where we look at how these challenges can be mitigated with better estimators. The relatively simple estimator that we will examine in this paper is specified by the relation where N is the instrumental noise covariance and M α is a normalizing scalar 9 for each bandpower α. This choice gives an estimator that is quite similar to the crude | V (b, τ )| 2 estimator discussed informally in Section II. The principal difference between what we will consider here and our previous estimator is the presence of C ,α . The role of C ,α is two-fold. Its first purpose is to complete a signal-to-noise weighting of our data: the copies of N −1 downweight noisy data, while C ,α (by virtue of its being the derivative of C) upweights the high signal portions. The second purpose is to map the data from the input baseline b and delay τ space to the output uη space. Recall from Section II that while b and τ approximate u and η, respectively, the correspondence is not perfect. 
Applying C ,α completes the transition to Fourier space, a fact that will become more apparent when we write down an explicit form for the matrix. Studying the basic estimator given by Eq. (28) is worthwhile because it is approximately equivalent to the methods used in a number of state-of-the-art 21 cm power spectrum pipelines for analyzing observations and simulations [43,44,53]. These pipelines typically use an 9 In this paper, we do not consider the more general possibility of a matrix-based normalization, where instead of a simple multiplicative normalization, one multiplies the unnormalized bandpower estimatesp β by a matrix M to form the normalized bandpowerspα, i.e.,pα = β M αβp β . For details, please see Ref. [15] or Paper II. optimal mapmaking approach [54,55] to first go from visibilities to a gridded uvη data cube of Fourier amplitudes. The complex magnitudes of these amplitudes are then squared and binned to estimate power spectrum bandpowers. (Note that while the mapmaking may be optimal in this case, the subsequent power spectrum estimation is not). In Appendix B, we will prove that in the limit of infinitely fine bins in Fourier space, such pipelines are equivalent to estimating power spectra directly from the visibilities using Eq. (28) and the quadratic estimator formalism. Our numerical results will therefore be roughly representative of those seen in the aforementioned pipelines, but with fewer gridding artifacts because we go straight from visibilities to power spectra. For analytical tractability here and numerical tractability in later sections, we will use an approximate form for the covariance matrix: and we have once again omitted the instrumental noise contribution to the covariance for simplicity. Superficially, this looks quite similar to Eqs. (16) and (17). However, in this case we have gone beyond simply forming C ≡ xx † from Eq. (8), in that we have performed the integral over A b⊥ . This is not always permissible, and represents a subtle additional approximation: we have assumed that baselines that are similar in length but very different in orientation have a negligible correlation with each other, and that those with similar orientations are correlated as though they were identical in orientation. In other words, we assume that although two baselines can be completely uncorrelated, partially redundant, or perfectly redundant in the direction of the baseline vector, 10 overlaps between baselines in the transverse direction are treated in a binary fashion, so that the overlap is either zero or perfect. This assumption was inherited from the derivation of Eq. (8), which required a re-orientation of the axes of the uv plane so that the u axis would lie along the direction of the baseline. While this can always be done for a single baseline, the covariance matrix C encodes correlations between different baselines, which may be oriented differently. It is thus strictly speaking incorrect to form a covariance matrix from Eq. (8), and in principle one should use Eq. (6) instead. For the purposes of intuition, however, we may continue with our approximate expression as long as we remember that distant baselines have negligible correlation. As we previously mentioned, C ,α provides the crucial link between the input data and the output Fourier space. It therefore forms a crucial component of any error statistic. 
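The pieces introduced so far can be combined into a small numerical sketch of the basic estimator's statistics. Given a noise covariance N, a foreground covariance C_fg, and a family of response matrices C_,α, it forms E^α = M_α N^{-1} C_,α N^{-1}, fixes the scalars M_α so that each row of the window matrix sums to unity, and evaluates the window functions and additive bias. The error covariance is written here in the standard real-Gaussian form Σ_αβ = 2 tr[E^α C E^β C]; that convention (and the handling of complex-valued data) is an assumption of the sketch rather than something spelled out above.

import numpy as np

def basic_estimator_stats(N, C_fg, C_resp):
    """Window matrix, additive bias, and error covariance for the basic estimator
    E^a = M_a N^{-1} C_,a N^{-1}, with scalar M_a fixed so each row of W sums to unity.

    N      : (n, n) instrumental noise covariance
    C_fg   : (n, n) foreground covariance
    C_resp : (n_bands, n, n) array of response matrices C_,alpha
    """
    Ninv = np.linalg.inv(N)
    E = np.array([Ninv @ Ca @ Ninv for Ca in C_resp])       # unnormalized E^alpha

    W_un = np.einsum('aij,bji->ab', E, C_resp)              # tr[E^a C_,b] before normalization
    M = 1.0 / W_un.sum(axis=1)                              # scalar normalizations M_alpha
    E = E * M[:, None, None]
    W = W_un * M[:, None]

    C_tot = N + C_fg                                        # add signal terms here if desired
    b = np.einsum('aij,ji->a', E, C_tot)                    # bias b_a = tr[E^a (N + C_fg)]
    Sigma = 2.0 * np.einsum('aij,jk,bkl,li->ab', E, C_tot, E, C_tot)  # assumed real-Gaussian form
    return W, b, Sigma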
Working in the limit of a continuous (rather than discrete) set of bandpowers, we may differentiate C with respect to P (u α , η α ) to obtain where we used the fact that P (u, η) can be written as du α dη α P (u α , η α )δ(u − u α )δ(η − η α ). Inserting this into Eq. (28) provides a concrete example of the general proof of equivalence given in Appendix B. One sees that each copy of g acts on a noise-weighted copy of the data vector x. Examining Eq. (30) reveals that the action of g is to Fourier transform the delay spectrum back into the frequency domain, apply another copy of the tapering functions, grid the result at the appropriate location on the uv plane, and then Fourier transform in frequency again, before adjusting for frequency channel discretization by an additional weighting in η. This is precisely the procedure that one would follow with a mapmaking algorithm in uvη space [55]. The result is then squared to form a power spectrum. While this particular example may at first sight seem to render the delay basis obsolete (since the first action of g is to transform back to a frequency spectrum), it is important to remember that in a more realistic case, one may be unable approximate the bandpowers as being continuous. For example, at small u and η, bin sizes may be comparable to the values of u and η themselves. Many of the algebraic simplifications used in this section then become inapplicable, necessitating full numerical manipulations of the relevant matrices, which are typically more computationally efficient in the delay basis (as we discuss in Appendix C). Continuing with our approximation scheme for this section, however, Eq. (32) is particularly convenient for computing our suite of error statistics because it is separable. Taking advantage of this, the window functions for our basic estimator reduce to where we defined the shorthand g α ≡ g(u α , η α ; b, τ ), and in the last step assumed that our delay bins were fine enough to be approximated as being continuous. This form for the window function matrix has a straightforward geometric interpretation: when estimating the αth bandpower, one probes a mixture of the true bandpowers; the amount of the βth band that is included in the estimate of the αth band is given by the overlap of our interferometer's response to the αth and βth bands. We stress, however, that this simple form does not hold when one considers more complicated estimators such as the ones that we will consider in Paper II. Let us now compute some example window functions. Just as we did in Section II, we can gain some analytic intuition by working in the short and long baseline limits. For short baselines satisfying bθ 0 c/B band (or equivalently, if the time-delay τ of a signal between the antennas of the baseline satisfies τ 1/θ 0 B band ), we may invoke the same approximations that led to Eq. (9), and say that Inserting Eq. (34) into Eq. (33) and evaluating the integral over τ yields where φ 2 signifies the Fourier transform of φ 2 , not the square of φ. This expression is in line with what one might intuitively expect from interferometry: the spatial u-dependence of the window function is controlled by the primary beam (or more precisely, its Fourier transform), while the spectral η-dependence is controlled by the bandpass. On the other hand, with long baselines satisfying bθ 0 c/B band we can use the approximations that led to Eq. (10), obtaining Once again, we may insert this into Eq. 
(33) to get Now, consider first the η-dependence (rather than the u-dependence) of the window functions, since our principal worry is that smooth, low-η foregrounds might scatter to higher η because of the instrument's chromaticity. If we suppress the u dependence by setting u = u α = u β , we obtain Eq. (38) predicts that with our basic estimator, foregrounds should appear in the now-familiar wedge in uηspace. To see this, consider the additive foreground bias for a hypothetical single-baseline interferometer. Since we are concerned with foregrounds, the most relevant regions of the uη plane will be the low η regions. We may therefore safely ignore the γ 2 terms, since at low η they will be approximately unity anyway. If we imagine that such foregrounds are described by a power spectrum p fg α , the foreground covariance is given by α p fg α C ,α , and the foreground contribution b fg α to the bias in Eq. (26) is given by where we have suppressed the u dependence of the foreground power spectrum for notational cleanliness. Now, if we approximate the foregrounds as being completely comprised of flat spectrum sources, then P fg (η) ∝ δ(η), and our foreground bias becomes This equation provides a general mathematical form for the profile of the foreground wedge, as a function of η, and reduces to previously-derived special cases for the wedge profile in the limit of top-hat primary beamshapes and bandpasses [42]. Had we retained the φ 2 terms, they would have enforced the condition that u ≈ bν 0 /c. We therefore see that foreground contamination ought to leak to higher η for higher values of u, since those are probed only by larger b. Importantly, we emphasize that this foreground wedge feature is basis-independent, in that the final result makes no mention of delays. Our choice in this paper to express visibilities and covariances using a delay basis (rather than, say, a frequency basis, i.e. spectra) is a choice that is computationally convenient (see Appendix C for details), but is fundamentally an arbitrary one. The same is true regarding our indexing of measurements by baseline. [The baseline length b that appears in Eq. (40) is an expression of array configuration rather than basis choice; indeed, Eq. (33) shows that all baseline indices are summed over]. The foreground wedge is purely a function of our instrument's design and the form of our power spectrum estimator. With multiple baselines, Eq. (38) contains cross-terms between different baselines. Working once again at low η and high u (long baselines), one has While this expression is certainly more complicated than the one we had before, the same basic picture holds: the window functions can be quite broad in the η direction, thus allowing foregrounds to be scattered from low to medium values of η. But in general, the resulting contamination is still enveloped by terms like (A b * A b ) c bθ0 η , which limits the possible contamination at high η. Let us now turn briefly to the behavior of the window functions as a function of u. With the short baseline limit already provided by Eq. (35), we once again focus on the long-baseline limit. Setting η = η α = η β to isolate the u-dependence, Eq. (37) becomes where A 2 b denotes the Fourier transform of the square of A b , not the square of the Fourier transform. As expected, a baseline b roughly probes a u-scale equal to bν 0 /c. 
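The chromatic leakage described above can be seen in a toy numerical experiment: a single flat-spectrum source away from zenith produces a visibility whose delay spectrum peaks at the source's geometric delay, which grows linearly with baseline length. The numbers below (band, channelization, baseline lengths, source position, and the choice of a Blackman taper) are purely illustrative.

import numpy as np

c = 3.0e8                                    # speed of light [m/s]
freqs = np.linspace(146e6, 154e6, 160)       # 8 MHz band around 150 MHz, 50 kHz channels
taper = np.blackman(freqs.size)              # any smooth tapering function phi(nu)

def delay_spectrum(b, theta):
    """Delay-transform the visibility of a single flat-spectrum, off-zenith source."""
    tau_src = b * np.sin(theta) / c          # geometric delay of the source for baseline b
    vis = np.exp(-2j * np.pi * freqs * tau_src)     # flat-spectrum source, unit amplitude
    vtilde = np.fft.fftshift(np.fft.fft(taper * vis))
    delays = np.fft.fftshift(np.fft.fftfreq(freqs.size, d=freqs[1] - freqs[0]))
    return delays, np.abs(vtilde)

for b in (14.0, 140.0, 280.0):               # short to long baselines [m]
    delays, amp = delay_spectrum(b, theta=np.radians(30.0))
    print(f"b = {b:6.1f} m : |V(tau)| peaks at {delays[np.argmax(amp)] * 1e6:.3f} microseconds")

The short baseline keeps the source near zero delay (and hence low eta), while the long baselines push the same smooth-spectrum emission out to progressively larger delays, which is the intuitive origin of the wedge.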
Additionally, the window functions peak at u α = u β , a fact that is enforced by the appearance of the two copies of φ as a product, as well as by the presence The width of the window function in the u direction therefore depends on both the primary beam and the bandpass function. The A 2 b term has a characteristic width of θ −1 0 , while the φ functions have a characteristic width of B band b/c. Now, the window function involves the product of these functions, which means that the width of its central portion will be determined mostly by the narrower of the two contributions. In the long baseline limit that we are working in, θ −1 0 B band b/c by definition, so the A 2 b part is the narrower contribution and determines the width of central portion of the window function. We thus predict that the central width is just θ −1 0 , and does not depend on the u (or equivalently, k ⊥ ) value on which the window function is centered. However, simply characterizing the width of the central peak is insufficient for our purposes. As emphasized throughout this paper, the large dynamic range that exists between the bright foreground emission and the dim cosmological signal means that it is important to accurately capture the weak, low-level wings of the window functions, away from the central peak. These wings will be controlled by the broader contribution in Eq. (42), namely, the product of the bandpasses. As stated above, the bandpasses have a characteristic width B band b/c, and since baselines of length b probe spatial scales given by u ∼ ν 0 b/c, our window function wings will have a width ∆u of The widths of the window function wings are therefore proportional to u, and grow as u increases. Equivalently, since ∆u ∝ u, the fractional wing width is constant, and the wings will appear to have the same width on a logarithmic u (or k ⊥ ) scale. This is intuitively unsurprising, as longer baselines probe a greater spread of spatial scale due to their greater chromaticity, and the width of this spread is proportional to the baseline length (see Figure 2). Since a given u mode is mostly accessed by baselines of length b ∼ uλ, one is then driven to the conclusion that ∆u ∝ u. V. A NUMERICAL MODEL OF A BASIC ESTIMATOR Having made various approximations in the previous section to enable an analytic treatment of the foreground wedge, we will now discard most of these approximations in lieu of an exact numerical treatment of our basic estimator. We will find that the basic picture that we presented above remains unchanged. A. Instrument and foreground model The model instrument that we consider in this paper is intended to reflect a typical design for an interferometer optimized for 21 cm power spectrum (rather than one that is intended to function as a general-purpose lowfrequency radio observatory). Maximizing power spectrum sensitivity requires antennas to be placed in a way that yields a large number of short, identical baselines [17,47]. With this in mind, we perform our computations for a square, 20 by 20 array of antennas, with a 14 m spacing between adjacent antennas. Each antenna is assumed to have a Gaussian primary beam, with a fullwidth-half-max (FWHM) of 40.5 • that is approximated as frequency-independent in the B band = 8 MHz band that we consider. The frequency width of each individual spectral channel is set at B chan = 50 kHz. With no loss of qualitative generality, we consider only observations centered around ν 0 = 150 MHz. 
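The instrument model above can be instantiated directly. The sketch below lays out the 20 × 20 antenna grid with 14 m spacing, evaluates θ_0 from the quoted FWHM, and counts how many baselines fall into each baseline-length bin (the redundancy that later divides down the per-bin noise variance). The grid orientation and the 5 m length bins used here are illustrative choices.

import numpy as np

n_side, spacing = 20, 14.0                   # 20 x 20 antennas on a 14 m grid
freq0, B_band, B_chan = 150e6, 8e6, 50e3     # centre frequency and bandwidths [Hz]
fwhm = np.radians(40.5)
theta0 = fwhm / np.sqrt(8.0 * np.log(2.0))   # Gaussian beam width; ~17.2 degrees

# Antenna positions and the set of distinct baseline vectors (one orientation per pair).
xy = spacing * np.array([(i, j) for i in range(n_side) for j in range(n_side)], dtype=float)
ii, jj = np.triu_indices(len(xy), k=1)
blen = np.hypot(*(xy[jj] - xy[ii]).T)

# Redundancy per 5 m baseline-length bin (this count later divides the per-bin noise variance).
edges = np.arange(7.5, blen.max() + 5.0, 5.0)
n_per_bin, _ = np.histogram(blen, bins=edges)

print(f"{len(blen)} baselines, lengths {blen.min():.1f}-{blen.max():.1f} m, "
      f"theta_0 = {np.degrees(theta0):.1f} deg")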
The formalism presented in this paper applies to all redshifts accessible to a 21 cm interferometer, and none of the "lessons learned" in our analysis are substantially changed by examining a different redshift. We take the tapering function [Eq. (3)] to be Gaussian, even though previous studies in the literature have argued for more desirable choices such as a Blackman-Harris function or a Blackman-Nuttall function [16,43]. Using a Gaussian allows us to analytically compute the ν integral in our measurement equation [Eq. (8)], giving a closed-form expression in which the characteristic scale of the beam θ 0 is the standard deviation of our Gaussian beam, which in our case is θ 0 = FWHM/ √ (8 ln 2) = 17.2°. Intuitively, one sees that each delay mode of each baseline probes a reasonably localized region in uvη space. One also sees that there exists a complex exponential term that mixes spatial and spectral information, which is to be expected given the chromatic nature of an interferometer's synthesized beam. Following this, we form the covariance matrix under the same coherency approximation as the one we employed in the previous section: two baselines may have any amount of overlap on the uv plane in the direction parallel to their baseline vector, but are either completely non-overlapping or perfectly overlapping in the direction perpendicular to the baseline vector. This allows the integral over v (defined to be the direction on the uv plane perpendicular to a pair of correlated baselines) to be evaluated analytically. The sky signal portion of our covariance matrix then takes the form of Eq. (45), where α i ≡ 2πθ 0 B band b i /c and similarly for α j . We take the frequency channel response γ to be a Gaussian, which makes its Fourier transform γ̃ also a Gaussian. Computing C ,α is very similar to computing S. Since instrumental noise is random and does not depend on the power spectrum, we have C ,α = S ,α . To find C ,α , then, we simply need to evaluate the integrals in Eq. (45), but with u and η integration limits chosen to match the band in question, rather than being −∞ and +∞. Having described our instrument and how it manifests itself in the covariance of our measurements, the final ingredient that we require for our numerical calculations is a model for the total power spectrum P (u, η). We model the total power spectrum as the sum of the cosmological power spectrum and a foreground power spectrum. For the cosmological power spectrum, we use the spherically symmetric power spectrum provided in Ref. [56] and assume statistical isotropy to compute the cylindrical power spectrum needed for our covariance. As for the foregrounds, we consider a relatively simple two-component power spectrum model P fg , in which A is an overall normalization, C diff is the angular power spectrum of the diffuse Galactic emission, and ν diff c is its frequency coherence length. The corresponding quantities for point sources are C ps ℓ=2πu and ν ps c . The η dependence of this parametric form is motivated by the mathematical results of Ref. [26]. In that work, empirically motivated models of foreground spectra were put through an eigenmode analysis, and it was found that the resulting set of eigenmodes were essentially Fourier modes in frequency (i.e., η modes). The eigenvalue spectra were well fit by a linear exponential in η, with a coherence frequency of 64.8 MHz. For simplicity, we adopt this value for both ν ps c and ν diff c .
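A minimal sketch of such a two-component foreground model is given below. Because the parametric form itself is not reproduced here, the exponential η-dependence with coherence frequency 64.8 MHz is one natural reading of the description above, and the angular power spectra are placeholders to be replaced by the GSM-based fit and the point-source level described next.

import numpy as np

nu_c = 64.8e6     # coherence frequency adopted for both foreground components [Hz]
A_fg = 1.0        # overall normalization, to be fixed by the zero-mode condition described below

def C_ell_diffuse(ell):
    """Placeholder for the diffuse Galactic angular power spectrum; the GSM-based fit
    described in the next paragraph would replace this simple power law."""
    return np.maximum(np.asarray(ell, dtype=float), 1.0) ** -2.4

def C_ell_point(ell):
    """Placeholder Poisson (ell-independent) point-source spectrum, pinned to 10% of the
    diffuse emission at ell = 1000 as described below."""
    return 0.1 * C_ell_diffuse(1000.0) * np.ones_like(np.asarray(ell, dtype=float))

def P_fg(u, eta):
    """Two-component foreground power spectrum; the exp(-nu_c |eta|) factor is an assumed
    reading of the 'linear exponential in eta' description above."""
    ell = 2.0 * np.pi * np.asarray(u, dtype=float)
    return A_fg * (C_ell_diffuse(ell) + C_ell_point(ell)) * np.exp(-nu_c * np.abs(eta))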
To model the angular structure of the diffuse Galactic emission, we use the Global Sky Model (GSM) software [57] to generate a model of the sky at 150 MHz. We then compute the angular power spectrum of this model, which we find to be well-fit by with a 1 = −1.450, a 2 = 0.1003, b 1 = 0.7666, and b 2 = −2.365. For notational simplicity, we have omitted the overall normalization of C diff , as it can be absorbed into A. We note that while only the high (power-law) portion of the angular power spectrum is typically modeled in foreground studies, it is crucial to include the low behavior as well. To see this, note that Eq. (45) predicts a substantial overlap between the response of the shortest baselines and the u = 0 mode of the power spectrum. This is simply reflecting the fact that the sky is not infinite in extent. Thus, even though the autocorrelation/zero-spacing baseline products are typically discarded from interferometric data, the instrument may still be sensitive to the zero mode of the sky. For the point source contribution to foregrounds, we neglect clustering for simplicity, and therefore take C ps =2πu to be a constant. We fix this constant by assuming that at = 1000, the amplitude of the point source angular power spectrum is roughly a factor of 10 smaller than that of the diffuse emission [58]. We therefore set C ps =2πu = 0.1 C diff =1000 . Using a simple, -independent angular power spectrum for point sources is not a required approximation for our formalism, and this assumption can be easily relaxed. In principle, including clustering would boost the point source power at low modes [59]. In practice, however, the low regime is dominated by the diffuse Galactic emission anyway, and we do not expect that an inclusion of clustering would qualitatively impact our numerical results. We fix the overall normalization A of our foregrounds by considering the zero-mode of the power spectrum. We require that where I GSM = 433 K is the mean temperature of our GSM foreground template. Finally, we must add the noise contribution to our covariance. The computation of a noise covariance matrix N is rather subtle, given the assumptions that we have made above regarding baselines that overlap in directions perpendicular to their baseline vectors. We first sort the baselines of our array by baseline length into 54 equallyspaced bins. If only one baseline fell into each bin, the noise variance assigned to each bin would be [47] N ii where t is the integration time (taken to be 520 hrs), T sys is the system temperature, and Note that our expression for the noise variance differs from equations that are commonly seen in the literature in two ways. First, our variance is proportional to B band . This is simply due to the fact that we are working in a delay basis rather than a frequency basis. In addition, the beam area used here is the integrated square of the beam profile (rather than just the integral of the beam profile itself), which was shown in Ref. [16] to be the correct beam area to use for this calculation. With our Gaussian primary beam, Ω pp = πθ 2 0 . We assume a skynoise dominated instrument and set T sys = I GSM . We now adjust for the fact that each baseline length bin contains more than just a single baseline. 
If there are n(b i ) baselines within a particular bin, we simply divide the noise variance by n(b i ), assuming that a combination of instantaneous redundancy and rotation synthesis allow a large fraction of the baselines to be combined coherently, prior to forming a power spectrum. (In practice, this is a somewhat optimistic assumption, given that baselines that are dissimilar in orientation can only be combined statistically, and not coherently). Further assuming that the noise covariance matrix is diagonal, our final form for N is where the Kronecker delta function ensures that instrumental noise is uncorrelated both between baselines and between delay bins. Admittedly, the instrument and noise models presented in this section are quite crude and make use of a large number of simplifying assumptions. The assumptions regarding instrumental noise and rotation synthesis, in particular, are quite optimistic. However, the numerical computations that we perform in this paper are not designed to be definitive sensitivity calculations. Rather, the goal is to use our rough model-which captures the essential features of large-amplitude smoothspectrum foregrounds and lower-amplitude broadband instrumental noise-to gain some statistically rigorous intuition for the EoR window. For further technical details regarding our exact implementation (e.g. for information about bin sizes), we refer the reader to Appendix C. B. Window functions and foreground bias We now examine the statistical properties of our basic power spectrum estimator. We will find that with our basic estimator, the foregrounds appear in a wedge in k ⊥ k space, consistent with findings in the previous literature. In Figure 3, we show the expected foreground bias from Eq. (27) in terms of the cylindrical Fourier coordinates k ⊥ and k . This shows the expected level of additive foreground bias in an estimate of the power spectrum. This foreground bias is the quantity that is most directly comparable to previous studies where a single set of simulated foregrounds are propagated through a power spectrum estimation pipeline. We see that our results are consistent with such studies as well as the analytic arguments of the previous section: the foregrounds are mostly sequestered to low k modes at low values of k ⊥ , rising to higher k modes at high k ⊥ in a wedge-like pattern. As predicted by Eq. (13), the edge of the wedge is defined by a line with unit logarithmic slope. Beyond this edge, the contamination drops off sharply to give a clean EoR window. Importantly, we emphasize that the characteristic shapes seen here are not tied to the specifics of our foreground model, beyond the fact that foregrounds are spectrally smooth. Instead, the foreground wedge is tied to the chromatic nature of our instrument and the properties of our basic estimator. While this may be difficult to establish definitively with a simulation, the covariant formalism that we make use of in this paper allows the foreground models to be easily disentangled from the way they interact with the instrument and the power spectrum estimation pipeline. For example, in Figure 4, we to using the more conventional Fourier coordinates k ⊥ and k when displaying our results. For details about this conversion and our Fourier conventions, please see Appendix A. show some examples of window functions on the k ⊥ k plane. From Eq. (25), we know that these functions depend only on our choice of estimator (through E α ) and the instrument's response (through C ,β ). 
Thus, any signatures of the wedge that are independent of the foreground model should be apparent in the window functions. Consider first the leftmost plot from Figure 4, which shows three window functions that are all centered on the k value of 0.25 hMpc −1 , but different k ⊥ values. As one moves to higher k ⊥ , the window functions become increasingly elongated 12 in the k direction. With long tails to low k (where foreground emission naturally resides), this implies a leakage of smooth foregrounds from low to high k . Since this effect is most pronounced at high k ⊥ , the result is precisely a wedge-like structure. To guide the eye, the black lines on each plot show the edge of the wedge as predicted 13 by Eq. (13) with θ 0 = π/2. Window functions centered at higher k values (central and rightmost plots in Figure 4) also develop elongations 12 This elongation is real and not just an artifiact of our logarithmic k ⊥ k axes. We know this because the set of three windows shown in each plot of Figure 4 are chosen to be centered on the same k . 13 Our use of θ 0 = π/2 to define the edge of the wedge is somewhat arbitrary, given that there is nothing special about θ 0 = π/2 in our flat-sky approximation. However, as emphasized in Ref. [32], in a proper curved-sky treatment it is the natural scale to consider, since the primary beam of an instrument must vanish at the horizon. as one moves from low to high k ⊥ , although the effect is visually subtle due to our logarithmic plotting. These elongations are slightly less important for smooth foregrounds, as even the elongated tails are not quite long enough to reach the lowest k for window functions that are centered at high k . However, such effects may be important if foregrounds turn out to contain unsmooth (high k ) components (see Section VI for a brief discussion of this). Except for at the lowest k ⊥ values, the width of the window functions in the k ⊥ direction also appears to increase with increasing k ⊥ . Plotted on the logarithmic axes of Figure 4, it is visually clear that at intermediate to high k ⊥ the window functions have a roughly constant logarithmic width, confirming the proportional increase in width with k ⊥ that we predicted in Eq. (43). Our analytic, covariant treatment of power spectrum statistics allows us to compute window functions to the high dynamic range shown in Figure 4. This is crucial given that foregrounds are expected to be ∼ 10 10 times brighter in power (i.e., in temperature-squared units) compared to the cosmological signal. It is therefore essential to capture the low-level tails of window functions. Conveniently, once the windows have been computed, the foreground bias for a different foreground power spectrum can be easily determined using Eq. (39a). C. Error bars and error covariance In addition to the foreground bias, we may quantify the error covariance in estimates of the power spectrum. In Figure 5, we show the square-root of the diagonal elements of the error covariance matrix [Eq. (23)], i.e. the power spectrum error bars. These error bars capture more than just thermal noise errors, and include contributions from the foreground covariance. Indeed, one sees that the foreground wedge appears not just in the bias that we discussed previously, but also in the form of increased error bars. Outside the wedge, the error bars are dominated by thermal noise, and are quite low, in what constitutes the EoR window. 
Note that the thermal noise contribution did not appear in the bias, since we have assumed that cross-correlations have eliminated the noise bias. Towards the smallest k ⊥ , the errors increase by a small amount due to cosmic variance. This effect is typically negligible unless k is also small, but does play a small role since our model has a rather low noise level. (Again, we emphasize that our goal is not to perform a definitive sensitivity calculation). At the highest k , the errors also rise slightly. This is due to the finite spectral resolution of our instrument, which is self-consistently included in our error bars via the spectral channel profile γ in our formalism. Beyond just the error bars, our formalism also delivers the off-diagonal elements of the error covariance Σ, which encode the error correlations between different k ⊥ k cells. As emphasized in Ref. [15], these correlations need to be quantified if one wishes to accurately propagate errors from the cylindrical power spectrum P (k ⊥ , k ) to the spherical power spectrum P (k). To capture this information, we consider the correlation matrix, defined as which is essentially a whitened version of the error covariance. Examining Σ instead of Σ allows the correlations rather than the larger errors within the wedge to be the dominant feature. To further aid visualization, we focus on just small portions of the correlation matrix. Whereas the full correlation matrix would relate all k ⊥ k coordinates to all other such coordinates, in Figure 6 we fix k ⊥ at three separate values, and consider the correlations between different k coordinates. Immediately obvious is the fact that the error correlations form qualitatively different structures at different values of k ⊥ . To calibrate our expectations, we include in each plot a semi-transparent square of size ∆k × ∆k , where This is the scale over which errors are expected to be correlated in the k direction. It is derived by making the assumption that for a survey with bandwidth B band , the error correlation scale in η should be roughly 1/B band , and expressing this scale in cosmological Fourier coordinates. In the leftmost plot of Figure 6, where k ⊥ is fixed at the low value of 0.0042 hMpc −1 , we see that our rough expectations are correct. Having chosen our Fourier cell sizes with the survey volume in mind (see Appendix C for details), the cells are seen to be essentially uncorrelated. This simple picture breaks down, however, as we move towards higher k ⊥ , where the effects of the foreground wedge are more pronounced. Figure 6 shows that increasing k ⊥ (which essentially means moving deeper and deeper into the wedge) causes different k cells to become increasingly correlated. To understand why error correlations are to be expected, consider what it would take for the errors to be uncorrelated in k . Suppose one had an achromatic instrument with noise properties that were uncorrelated and uniform (i.e. white) between all frequency channels. Moving into k space by way of a Fourier transform does not induce any error correlations, because uncorrelated white noise has the same statistical properties in all bases. On the other hand, for our measurement we have an inherently chromatic instrument, which makes our noise chromatic. Fourier transforming non-white noise will result in noise correlations, even if the noise was uncorrelated to begin with. Additionally, interferometers are chromatic in a very specific way, with longer baselines more chromatic (as illustrated in Figure 2). 
Since higher k ⊥ are probed by longer baselines, error correlations should increase with k ⊥ , as seen in Figure 6. Viewed together, Figures 5 and 6 suggest that the characteristic k ∥ correlation scale coincides roughly with the k ∥ extent of the wedge. This suggests that the instrumental effects that caused the wedge have also decreased the number of independent measurements, thus decreasing overall signal-to-noise. (Note, however, that the correlated errors persist even at high k ∥ . Though somewhat challenging to see with our logarithmic axes and hybrid binning, one sees that even there the off-diagonal k ∥ correlations are greater for large k ⊥ modes). That there is a rough matching of scales between the wedge and our error correlation scale is not entirely surprising (though not a given in the a priori sense), since window functions and covariances are closely related to one another. Recall that our window functions exhibited long elongations in k ∥ as we moved towards higher k ⊥ . With extremely broad window functions, our instrument essentially smoothed over a large number of modes, and it is unsurprising that the errors in nearby bins ended up being positively correlated. We can quantify the error correlations in more detail by defining an effective number of independent cells N eff , with N c being the number of Fourier cells that enter into a simple unweighted averaging of Fourier modes. By construction, if all the modes that one averages over are independent, N eff equals N c , whereas N eff = 1 if the modes are perfectly correlated. [Figure 7 caption: Effective number of independent cells N eff as a function of N c , the number of Fourier cells included in an averaging of k ∥ modes (going from low to high k ∥ ). The approximately linear relationship seen for the low k ⊥ curve is indicative of uncorrelated modes. On the other hand, the high k ⊥ curve approaches linearity only when high k ∥ modes dominate the average. Its initially slow increase at low k ∥ shows that the modes are highly correlated. Thus, for power spectrum sensitivity calculations to be reliable, one must take error correlations into account or risk overestimating an instrument's sensitivity.] In Figure 7, we consider the effect of averaging Fourier modes along the k ∥ axis and show N eff as a function of N c , with N c increasing from unity (when only the lowest k ∥ mode is included in the average) to N c = 30 (the total number of k ∥ cells in our computation). We do this for two different constant k ⊥ slices on the Fourier plane, k ⊥ = 0.0042 hMpc −1 and 0.13 hMpc −1 (corresponding to the leftmost and rightmost plots of Figure 6, respectively). For low k ⊥ , one sees the linear relationship N eff ≈ N c regardless of how many k ∥ cells are included in the averaging. For high k ⊥ , however, N eff increases only very slowly at first, as one averages together the highly-correlated modes within the wedge. The increase in N eff is linear only at higher k ∥ , where our hybrid binning becomes logarithmic, and each cell encompasses a greater extent in k ∥ , eventually exceeding the error correlation length. In our formalism, the correlated errors within each of these larger cells have already been self-consistently averaged over, giving independent cells that contribute linearly to N eff .
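The whitening of the error covariance and the mode-counting statistic can be written compactly as below. The particular definition of N eff used here is an assumption chosen only to satisfy the two limiting behaviors just stated (N eff = N c for independent cells, N eff = 1 for perfectly correlated cells).

import numpy as np

def correlation_matrix(Sigma):
    """Whitened error covariance: R_ab = Sigma_ab / sqrt(Sigma_aa Sigma_bb)."""
    sig = np.sqrt(np.diag(Sigma))
    return Sigma / np.outer(sig, sig)

def n_eff(Sigma_sub):
    """Effective number of independent cells for an unweighted average over the cells of
    Sigma_sub; equals N_c for uncorrelated cells and 1 for perfectly correlated cells."""
    sig = np.sqrt(np.diag(Sigma_sub))
    return np.sum(sig) ** 2 / np.sum(Sigma_sub)

# Example (hypothetical indexing): take the sub-block of Sigma for one fixed-k_perp column
# and average in progressively more k_par cells, from low to high k_par.
# Sigma_col = Sigma[np.ix_(cells, cells)]
# curve = [n_eff(Sigma_col[:n, :n]) for n in range(1, Sigma_col.shape[0] + 1)]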
Regardless of the specific of one's power spectrum estimation formalism, it is crucial to take into account error correlations if averaging together bins that are narrower than the error correlation length, or if the bins are wider than this, to ensure that the implicit averaging performed within the bin is done correctly [rather than relying on possibly incorrect a priori assumptions such as that implied by Eq. (53)]. For the particular computational set-up in this paper, one sees about a factor of 2 reduction in N eff for the highest k ⊥ . Going to even higher k ⊥ causes even greater reductions in sensitivity (compared to simple expectations). The correlations discussed here are particularly important for experiments proposing to make measurements deep within the wedge. As discussed above, the extent of the wedge in k provides the characteristic error correlation length between different k cells. It therefore follows that errors are highly correlated whenever one chooses to work within the wedge. Though such measurements may be well-motivated by the fact that the cosmological-signal-to-thermal-noise ratio is largest at low k, previous studies that established this have typically assumed the correlation length given by Eq. (53). The resulting signal-to-noise ratios may therefore have been over-estimated. We emphasize that the error correlations seen in this section exist whether or not one's power spectrum estimation pipeline includes a direct subtraction of modeled foregrounds from the input data. To see this, suppose a direct foreground subtraction scheme reduces the foreground covariance by some constant multiplicative constant, so that C fg → εC fg where 0 < ε ≤ 1. If one assumes that thermal noise is negligible compared to foreground residuals at low k after a long time-integration, the result of our reduced C fg will be a corresponding decrease in the amplitude of the final bias and the error bars. However, the plots of error correlation in Figure 6 will remain unchanged, since the correlation is insensitive to an overall scaling. The number of independent modes will therefore still decrease in the manner discussed above, reducing sensitivity. Since this loss of sensitivity is the most pronounced at high k ⊥ , it is particularly important to take into account for arrays that make use of long baselines, such as LOFAR or GMRT. VI. UNSMOOTH FOREGROUNDS? In many prior works on 21 cm cosmology, the assumption of spectrally smooth foregrounds is considered crucial to one's ability to perform foreground subtraction. At various points in this paper, we too have assumed that foregrounds are smooth, and incorporating this assumption into our general framework gave rise to various predictions, such as the existence of the EoR window. However, since the power spectrum estimation framework itself does not require smooth foregrounds, a number of our key results would survive a (hypothetical) discovery of unsmooth foreground sources. We now briefly discuss how the problem of foreground mitigation would change if such an unfortunate discovery were to be made. Because the window functions encode only the mapping of the true power spectrum to the estimated power spectrum and do not depend on the actual power spectrum, they do not rely on the assumption of smoothness. Irrespective of whether the foregrounds are smooth or not, the window functions accurately describe how foreground power is smeared out on the k ⊥ k plane by our instrument. 
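In practice, this prediction amounts to a single matrix-vector product once the windows are in hand: assuming the foregrounds can be described by a set of bandpowers p_fg, the expected additive bias is b_fg = W p_fg, and a new foreground model (smooth or not) can be swapped in without recomputing W.

import numpy as np

def foreground_bias_from_windows(W, p_fg):
    """Additive foreground bias predicted from precomputed window functions, assuming the
    foregrounds are describable by a set of bandpowers p_fg: b_fg_a = sum_b W_ab p_fg_b."""
    return np.asarray(W) @ np.asarray(p_fg)

# A revised (e.g., less smooth) foreground model only changes p_fg; W is reused as-is.
# b_smooth   = foreground_bias_from_windows(W, p_fg_smooth)
# b_unsmooth = foreground_bias_from_windows(W, p_fg_unsmooth)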
If the foregrounds are smooth, their influence is limited to the wedge. This makes foreground mitigation easy, as their avoidance requires no more than a simple cut on the Fourier plane. If the foregrounds are not as smooth as expected, the EoR window will be smaller, but its exact size can still be predicted by convolving the (now unsmooth) model of our foregrounds with the same window functions as before. Forthcoming data from various experiments at higher sensitivity will allow further foreground modeling, and-with the help of the window functions-an accurate determination of the extent of the foreground wedge. Encouragingly, recent theoretical calculations have shown that in physically-motivated models of synchrotron emission, foreground spectra tend to be smooth even under the most pessimistic of assumptions [60]. For the sake of argument, however, let us consider a worst-case scenario where foregrounds are discovered to be sufficiently unsmooth for the EoR window to be drastically reduced in size. In such a scenario, a number of strategies can be employed for foreground subtraction. First, foregrounds can be modeled and subtracted to the best of one's ability in the visibility data. Following that, a more sophisticated estimator (one that downweights the data not by N but by the total covariance C to account for uncertainty in the foregrounds) can be used. Finally, the foreground bias can be subtracted from the power spectrum using Eq. (39a), and window function decorrelation techniques can also be used in an attempt to increase the size of the EoR window. We explore a number of these techniques in Paper II [46]. VII. CONCLUSIONS In any measurement of the redshifted 21 cm power spectrum, foreground contamination is a serious concern. Fortunately, observations and various theoretical studies have shown that despite complications arising from the inherently chromatic nature of an interferometric measurement, smooth spectrum foregrounds occupy a characteristic wedge region in cylindrical k ⊥ k Fourier space. The complement of this region is expected to be relatively foreground-free, forming an EoR window where measurements might be made. While there exists an extensive literature on the topic, previous studies have typically focused on how the fore-ground wedge manifests itself in the mean power spectrum signal. However, the same physical effects that cause the wedge in the power spectrum also affect the associated error statistics, such as the error covariance and the window functions. An examination of some of these statistics was performed in Ref. [42] using Monte Carlo methods. In this paper, we have provided a complementary treatment by deriving a rigorous, fully-covariant mathematical description of the foreground wedge and the EoR window. While our methods require the numerical evaluation of some matrix expressions, they differ from previous work in that they do not require numerical simulations of interferometric measurements, since the underlying framework is largely analytic. This makes it possible to compute error statistics with very high dynamic range, which is crucial since the foregrounds are expected to dwarf both the instrumental noise and cosmological signal. Our formalism takes advantage of the delay spectrum techniques introduced in Ref. [32] to achieve computational savings, and in fact it is the use of the delay basis that makes our covariant, high dynamic range calculations numerically feasible. 
However, we re-emphasize that this is merely a choice of basis, and that our results are independent of this choice. This was shown explicitly in Section IV, when we developed a description of the foreground wedge in terms of window functions. Our description decouples the causes of the wedge-which depend only on the chromatic nature of the instrument and the specific form of our power spectrum estimator-from the detailed nature of the foreground emission. Independent of foreground properties, window functions that are centered at high k ⊥ will typically develop long tails towards low k . The wedge then results from the additional assumption that foregrounds are spectrally smooth, so that strong signals from low k are transferred to higher k by the long tails. Once the window functions have been computed, however, our formalism allows such assumptions to be relaxed. With a fully covariant framework, we are able to track all error correlations in our numerical computations. We find that measurements made at high k ⊥ have highly correlated errors, effectively reducing the number of independent measurements that can be made in that part of Fourier space. This is particularly important for sensitivity forecasts that rely heavily on measurements made within the wedge, since the wedge's extent in Fourier space is roughly on the same scale as that of the error correlations. Previous studies have typically neglected error correlations, assuming that errors are independent as long as the spatial Fourier cells are of the same size as an antenna's uv footprint, and the spectral Fourier cells are on the order of 1/B band . Our work suggests that this is likely to be too optimistic an assumption. At the highest k ⊥ considered in our numerical computations (k ⊥ = 0.13 hMpc −1 ), for example, error correlations reduce the number of independent modes by approximately a factor of 2. This effect will be even more pronounced at even higher k ⊥ , which are probed by experiments with extremely long baselines. Since the chromatic effects that caused the wedge are closely related to those that cause error correlations, it will be crucial in future research to address the question of exactly how far the wedge can be pushed back (or equivalently, how much one can expand the EoR window). In Paper II, we use the formalism of this paper to explore statistical methods for enlarging the EoR window [46]. In this paper, our goal was to provide a rigorous treatment of the wedge. Previous treatments have typically made different simplifying assumptions. These include neglecting partially redundant baselines, approximating delay modes as η modes, making assumptions about baseline length, assuming top-hat primary beams, neglecting binning artifacts, or assuming that errors are uncorrelated on the Fourier plane. Our framework discards all of these approximations simultaneously, and it is gratifying to see that the basic picture of the EoR window as a naturally foreground-free region of Fourier space remains unchanged. This bodes well for foreground avoidance efforts that aim to detect the EoR by working outside the wedge, making it possible for 21 cm cosmology to open a new window into the high redshift universe using only existing data analysis techniques, with even more transformative results possible with further advances that expand the EoR window. 
where c is the speed of light, z is the redshift of observation, H 0 is the Hubble parameter, Ω m is the normalized matter density, and Ω Λ is the normalized dark energy density. The line-of-sight direction is more subtle. When forming a power spectrum, one typically includes only data from a relatively narrow range in redshift. Otherwise, cosmological evolution invalidates the assumption of a translation-invariant temperature field, which is needed in the definition of a power spectrum. What matters, then, is not the mapping between frequency and the total comoving line-of-sight distance, but rather, the local relation between differences in frequency ∆ν and differences in distance ∆r at the redshift of observation: where ν 21 ≡ 1420 MHz is the rest frequency of the 21 cm line. Having established this mapping we may (in what is perhaps an abuse of notation) recenter our coordinates so that ∆ν → ν and ∆r → r . Such a re-centering introduces a constant phase shift in our Fourier transforms, which has no bearing on quadratic statistics such as the power spectrum. Making the identification between T (r ⊥ , r ) and I(θ, ν) using our mappings, we may compare Eqs. (A1) and (A4) to conclude that This allows the power spectra defined under the different Fourier conventions to be related to one another: In this paper, we will plot all numerical results in terms of cosmological Fourier coordinates k ⊥ and k , even though we perform all our computations in terms of u and η. We use WMAP9 cosmological parameters: Ω m = 0.28, Ω Λ = 0.72, and H 0 = 69.7 km/s Mpc [1]. thereby proving that the basic power spectrum estimator that we examined in Section IV is equivalent to one where one forms a uvη space dirty map from visibilities, and then squares the result. We stress that the proof that we have just presented is basis-independent, in the sense that our data vector need not be indexed by baseline and delay. For example, one may choose to deal with frequency spectra rather than delay spectra, in which case the data vector would be indexed by baseline and frequency channel. The resulting h α vectors would no longer be given by Eq. (17), but the proof shown here would be unchanged. Crucially, the proof shown here assumed an infinitelyfinely discretized Fourier space. In practice, this will only be a good approximation on small scales (large Fourier wavenumbers), where the difference in wavenumber ∆k between neighboring discretized bins is small compared to the magnitudes of the wavenumbers k themselves. In Paper II we will formalize this assumption by importing the Feldman-Kaiser-Peacock approximation that is commonly used in galaxy surveys. Importantly, we emphasize that while squaring a normalized uvη dirty map is a perfectly reasonable way to estimate a power spectrum, it is by no means optimal. Indeed, Eqs. (B4) and (B7) are provably non-optimal, and better estimators are explored in Paper II. Instead, the estimators considered here (and therefore in Section IV) are intended to be representative of simple, "first pass" methods [43,44,53], and their statistical properties provide basic pictures of the challenges that one faces. Appendix C: Technical computational details In this Appendix, we provide further details pertaining to the numerical computations described in Section V. Binning In the quadratic estimator formalism used in this paper, there are two sets of discretizations: a discretization of the input data and a discretization of the output power spectra. 
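Before turning to the binning details below, the flat-sky coordinate conversions referenced in this appendix can be sketched as follows, using the WMAP9 parameters quoted above. The expressions are the standard mappings k ⊥ = 2πu/D c (z) and k ∥ = 2πη ν 21 H 0 E(z)/[c(1+z)²]; the 2π and unit conventions are assumptions of the sketch, since conventions differ between works.

import numpy as np
from scipy.integrate import quad

c_kms = 2.998e5                  # speed of light [km/s]
H0, Om, OL = 69.7, 0.28, 0.72    # WMAP9 parameters quoted above
nu21 = 1420.0e6                  # rest frequency of the 21 cm line [Hz]

def E(z):
    return np.sqrt(Om * (1.0 + z) ** 3 + OL)

def comoving_distance(z):
    """Line-of-sight comoving distance D_c(z) in Mpc, for a flat cosmology."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (c_kms / H0) * integral

def uveta_to_k(u, eta, z):
    """Assumed standard mappings: k_perp = 2 pi u / D_c(z) and
    k_par = 2 pi eta nu21 H0 E(z) / [c (1+z)^2], both in Mpc^-1
    (divide by h = H0/100 for h Mpc^-1)."""
    k_perp = 2.0 * np.pi * u / comoving_distance(z)
    k_par = 2.0 * np.pi * eta * nu21 * H0 * E(z) / (c_kms * (1.0 + z) ** 2)
    return k_perp, k_par

z_obs = nu21 / 150.0e6 - 1.0     # redshift of the 150 MHz observations, ~8.5
print(uveta_to_k(u=100.0, eta=1.0e-6, z=z_obs))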
For all the computations performed here, we have chosen to express the input data in a basis parameterized by baselines and delay. In other words, each component of the input data vector x corresponds to a different baseline and delay pair. For computational tractability, we bin baselines together into 50 linear bins of width 5 m, with the first bin centered about 10 m and the last bin centered about 255 m. (There are in fact some slightly longer baselines in our array. However, we discard them in our analysis for reasons of numerical stability). The delay bins are again linear, and range from −200 µs to 198.75 µs in 320 bins. This gives delay increments of 0.125 µs, equal to the natural bin size of 1/B band . Note that this equality is by no means a requirement. If computational resources are not a concern, it may be preferable to slightly over-resolve. This is perfectly legitimate despite the fact that the bins will no longer be independent, since the inclusion of the channel profile γ and our tracking of full covariance information allows the nonindependence to be self-consistently captured. Given that a baseline of length b roughly probes modes with u ∼ bν 0 /c, and that a delay bin τ roughly probes η ∼ τ , a logical way to discretize the output uη space would be to use linear bins that matched the input bins (but with u bins scaled by an appropriate factor of ν 0 /c). Such a scheme would be the most appropriate for matching the specifications of the instrument. However, the cosmological power spectrum is expected to evolve on logarithmic k scales. Thus, a linear binning is computationally wasteful at high k, where a large number of bins are used to resolve a power spectrum that does not evolve very much. On the other hand, a logarithmic binning scheme that is appropriate at high k will tend to be computationally wasteful at low k, where one would be over-resolving the instrumental response. As a compromise, we use a hybrid binning scheme that is roughly linear at low k and roughly logarithmic at high k. In this scheme, the (n + 1)th boundary of the u bins u n+1 is given by u n+1 = 1.036u n + 2.5. (C1) Similarly, the (n + 1)th boundary of the η bins η n+1 is given by η n+1 = 1.095η n + 0.125 µs. At low u and low η the additive terms dominate, yielding bin boundaries that are spaced in an approximately linear fashion well-suited to the instrumental specifications. At high u and η the multiplicative terms dominate, giving logarithmic bins that are a good fit for theoretical expectations. For both u and η, we use 30 bins, giving a total of 900 uη bandpowers. The bottom edge of the lowest u bin is at u = 3, while the bottom edge of the lowest η bin is at η = 0.12 µs. Sparseness and computational shortcuts The methods and computations presented in this paper are basis-independent. By this, we mean that while our final goal is to estimate a power spectrum (and its associated error statistics) on the k ⊥ k plane, our input data may be expressed in any basis that we find convenient. We now elaborate on our reasons for working in a baseline/delay basis. As an example, consider the evaluation of C. With the Gaussian beams and tapering functions used in Section V, the covariance C is given by Eq. (45). For parts of the matrix corresponding to short baselines, one can see by inspection that the matrix will be diagonal-dominant, with a large number of off-diagonal elements that are close to zero. 
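The hybrid binning recurrence of Eq. (C1) is simple to generate programmatically; the sketch below reproduces the 30 u bins and 30 η bins quoted above from their lowest edges.

import numpy as np

def hybrid_edges(x0, mult, add, n_bins):
    """Bin edges from the recurrence x_{n+1} = mult * x_n + add: approximately linear where
    the additive term dominates and approximately logarithmic where the multiplicative
    term dominates."""
    edges = [x0]
    for _ in range(n_bins):
        edges.append(mult * edges[-1] + add)
    return np.array(edges)

u_edges = hybrid_edges(3.0, 1.036, 2.5, 30)               # lowest u edge at u = 3
eta_edges = hybrid_edges(0.12e-6, 1.095, 0.125e-6, 30)    # lowest eta edge at 0.12 microseconds

# 30 x 30 = 900 (u, eta) bandpower cells in total.
print(u_edges[:4], u_edges[-1])
print(eta_edges[:4] * 1e6, eta_edges[-1] * 1e6)           # eta edges in microseconds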
The C ,α matrices are even more sparse, since many of the diagonal elements (those that do not satisfy u ≈ ν 0 b i /c or η ≈ τ i ) will also be zero. In our computations, we skip the evaluation of matrix elements that are expected to be small. This represents significant savings in computation time, given that with our binning scheme each matrix measures 16, 000 × 16, 000, and each element requires numerically integrating a twodimensional integral given by Eq. (45). Moreover, with 900 bandpowers, this process must be repeated 900 times for each of the C ,α matrices. In our implementation, we set off-diagonal matrix elements of C to zero if the integrand is suppressed by at least 10 −12 relative to the relevant diagonal elements everywhere over the integration volume. For the C ,α matrices, we apply the additional constraint that a diagonal element is to be skipped if the integrand is attenuated by 10 −12 or more compared to a different diagonal element that satisfies u α ≈ ν 0 b/c and η α ≈ τ , where u α and η α are the uη values corresponding to the αth band. The sparseness that we have described here is a direct result of our using a baseline/delay basis. In contrast, parameterizing the spectral information in a frequency basis results in substantially denser matrices, since the data are highly correlated between frequencies [42]. The delay transform roughly isolates spectral information by η mode. This isolation is imperfect in the long baseline limit, as we saw in Eq. (10). The matrices are therefore still dense for elements corresponding to long baselines, but the sparseness that is available with short baselines provides enough savings to enable the full propagation of covariant information without resorting to Monte Carlo methods, which can sometimes be slow to converge to the dynamic range displayed in Section V. Finally, we note that in an application of our methods to real measurements, the delay transform operates individually on each baseline [32], and therefore can be applied with negligible computational cost to the input data.
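The two computational ingredients just described, the hybrid uη binning and the threshold-based skipping of negligible matrix elements, can be summarized in a short sketch. The recursion coefficients, bin counts, lowest bin edges, and the 10^-12 suppression threshold are taken from this appendix; the function names are ours, and the integrand callables are placeholders standing in for Eq. (45), so only the bookkeeping is illustrated, not the instrumental model.

```python
import numpy as np

def hybrid_edges(first_edge, mult, add, n_bins):
    """Bin edges from the recursion x_{n+1} = mult * x_n + add.

    The additive term dominates at small x (roughly linear bins, matched to
    the instrument); the multiplicative term dominates at large x (roughly
    logarithmic bins, matched to theoretical expectations).
    """
    edges = [first_edge]
    for _ in range(n_bins):
        edges.append(mult * edges[-1] + add)
    return np.array(edges)

# Values quoted above: 30 u bins starting at u = 3 and 30 eta bins
# (in microseconds) starting at eta = 0.12 us, i.e. 900 bandpowers in total.
u_edges = hybrid_edges(3.0, 1.036, 2.5, 30)
eta_edges = hybrid_edges(0.12, 1.095, 0.125, 30)

SUPPRESSION = 1e-12   # relative threshold used to skip matrix elements

def build_sparse_covariance(n, diag_peak, integrand_bound, full_integral):
    """Assemble an n x n covariance, skipping elements whose integrand is
    suppressed by at least SUPPRESSION relative to the relevant diagonal.

    diag_peak, integrand_bound, and full_integral are placeholder callables;
    in practice full_integral would carry out the 2D numerical integration.
    """
    C = np.zeros((n, n))
    for i in range(n):
        peak = diag_peak(i)
        for j in range(i, n):
            if integrand_bound(i, j) < SUPPRESSION * peak:
                continue                      # negligible element, never integrated
            C[i, j] = C[j, i] = full_integral(i, j)
    return C

# Toy demonstration with a covariance that decays away from the diagonal.
C = build_sparse_covariance(
    200,
    diag_peak=lambda i: 1.0,
    integrand_bound=lambda i, j: np.exp(-0.5 * (i - j) ** 2),
    full_integral=lambda i, j: np.exp(-0.5 * (i - j) ** 2),
)
print(u_edges[-1], eta_edges[-1], np.count_nonzero(C), "/", C.size)
```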
Ada protein– and sequence context–dependent mutagenesis of alkyl phosphotriester lesions in Escherichia coli cells Alkyl phosphotriester (alkyl-PTE) lesions are frequently induced in DNA and are resistant to repair. Here, we synthesized and characterized methyl (Me)- and n-butyl (nBu)-PTEs in two diastereomeric configurations (Sp and Rp) at six different flanking dinucleotide sites, i.e. XT and TX (X = A, C, or G), and assessed how these lesions impact DNA replication in Escherichia coli cells. When single-stranded vectors contained an Sp-Me-PTE in the sequence contexts of 5′-AT-3′, 5′-CT-3′, or 5′-GT-3′, DNA replication was highly efficient and the replication products for all three sequence contexts contained 85–90% AT and 5–10% TG. Thus, the replication outcome was largely independent of the identity of the 5′ nucleotide adjacent to an Sp-Me-PTE. Furthermore, replication across these lesions was not dependent on the activities of DNA polymerases II, IV, or V; Ada, a protein involved in adaptive response and repair of Sp-Me-PTE in E. coli, however, was essential for the generation of the mutagenic products. Additionally, the Rp diastereomer of Me-PTEs at XT sites and both diastereomers of Me-PTEs at TX sites exhibited error-free replication bypass. Moreover, Sp-nBu-PTEs at XT sites did not strongly impede DNA replication, and other nBu-PTEs displayed moderate blockage effects, with none of them being mutagenic. Taken together, these findings provide in-depth understanding of how alkyl-PTE lesions are recognized by the DNA replication machinery in prokaryotic cells and reveal that Ada contributes to mutagenesis of Sp-Me-PTEs in E. coli. The specific sequence of DNA within an organism imparts the genetic code for all domains of life; however, this code is susceptible to alterations because of limited chemical stability of DNA (1). As a result, the genetic integrity of DNA can be compromised by endogenous metabolites and exogenous chemicals, resulting in different types of damage (1,2). Alkylation is a major type of DNA damage (3), and the cytotoxic effects of DNA alkylation adducts are manifested by the fact that DNA alkylation also constitutes the central mechanism of action for many widely prescribed chemotherapeutic agents (2). Nucleobase modifications have been the major focus of DNA alkylation studies (3), although alkyl phosphotriesters (alkyl-PTEs) can also be efficiently induced (4). The latter lesions are formed when alkylating agents con-jugate with one of the two non-carbon-bonded oxygen atoms and, based on which of these oxygen atoms is attacked, alkyl-PTEs can form in S p or R p configuration ( Fig. 1) (4). It was also shown that chronic exposure of rats to 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone, a major tobacco-specific Nnitrosamine, through drinking water induces the formation of Me-PTE lesions in lung tissues (5). There have been some studies about the repair and biological consequences of alkyl-PTEs (4). Because the addition of an alkyl group to the backbone phosphate neutralizes its negative charge, the presence of alkyl-PTE lesions in DNA may perturb its interactions with proteins. For instance, a mixture of S p -and R p -ethyl-PTE impedes in vitro primer extension catalyzed by T4 DNA polymerase and Escherichia coli DNA polymerase I (6). Isopropyl-PTE was also found to inhibit the unwinding of duplex DNA mediated by superfamily 2 DNA helicases (6). 
Our previous study showed that the two diastereomers of alkyl-PTEs exhibited different replication bypass efficiencies in E. coli cells, where replication across the Sp-Me-PTE at a TT dinucleotide site is mutagenic and, intriguingly, the mutagenicity of the lesion requires the presence of the Ada protein (7). Previous studies showed that the frequencies for the formation of alkyl-PTEs are influenced by flanking sequences. For instance, Guichard et al. (8) employed N-nitrosodiethylamine to treat three strains of mice, detected the levels of Et-PTE products by 32P-postlabeling, and observed higher frequencies of Et-PTE lesions when the 5′-flanking nucleoside was a thymidine or 2′-deoxyguanosine rather than a 2′-deoxyadenosine or 2′-deoxycytidine. Likewise, the relative frequencies of 5′-nucleobases at PTE sites exhibited a nonrandom distribution in calf thymus DNA and in liver DNA of BALB/c mice treated with diethyl sulfate (9). LC-MS/MS results also revealed the effects of flanking sequences on the levels of Me-PTE lesions induced in lung tissues of rats treated with 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone and its metabolite, 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (5). Several models have been proposed to rationalize the sequence-dependent accumulation of alkyl-PTE lesions: 1) apart from the non-carbon-bonded oxygen atoms on backbone phosphate groups, each nucleobase possesses unique nucleophiles with different reactivities toward alkylating agents; 2) the electrophilicity of the alkylating agent may also influence the nonrandom distribution of alkyl-PTE lesions, where highly reactive alkylating agents may confer a more random distribution (4); or 3) the repair efficiencies for alkyl-PTEs may vary with the flanking base sequences. However, not much is known about how flanking sequence contexts modulate the biological consequences of alkyl-PTEs. Here, we employed a shuttle vector-based method, in conjunction with LC-MS (10, 11), to analyze how the flanking nucleobases of Me- and nBu-PTE lesions influence the fidelity and efficiency of DNA replication in E. coli cells, and how replication past alkyl-PTEs is modulated by the Ada protein and translesion synthesis DNA polymerases. Considering that we have previously investigated the TT sequence (7), here we examined the Me- and nBu-PTE lesions in both Sp and Rp configurations and in six different combinations of flanking sequences (TX and XT, with X being A, C, or G) (Fig. 1). The Me- and nBu-PTE lesions were selected to examine the influence that the size of the alkyl group has on the replicative bypass of alkyl-PTE lesions.

Results

The aim of the present study was to gain a comprehensive understanding of the impact that flanking base sequences have on DNA replication past alkyl-PTE lesions in E. coli, and to examine the roles of the Ada protein and translesion synthesis DNA polymerases in modulating the replicative bypass of these lesions. We synthesized 12-mer oligodeoxyribonucleotides (ODNs) containing a site-specifically inserted alkyl-PTE lesion in different sequence contexts following our recently published procedures (7), except that the exocyclic amino groups of the adenine, cytosine, and guanine bases in the phosphoramidite building blocks were protected (see "Materials and Methods" in the supporting information, Fig. 2, Figs. S1-S4). The synthesized ODNs were purified by HPLC (Fig. S5).
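For bookkeeping purposes, the sketch below simply enumerates the lesion contexts examined here (Me or nBu group, Sp or Rp configuration, and the six XT/TX flanking combinations) together with the 12-mer sequence templates used in this study (5′-ATGGCX(Y)TGCTAT-3′ and 5′-ATGGCT(Y)XGCTAT-3′). The helper names are our own, and the listing is only illustrative.

```python
from itertools import product

ALKYL = ("Me", "nBu")
CONFIG = ("Sp", "Rp")
FLANKS = ("AT", "CT", "GT", "TA", "TC", "TG")   # XT and TX sites with X = A, C, or G

def odn_sequence(flank, alkyl):
    """12-mer ODN carrying an alkyl-PTE between the two flanking nucleosides.

    Templates: 5'-ATGGC X(Y)T GCTAT-3' for XT sites and 5'-ATGGC T(Y)X GCTAT-3'
    for TX sites, with Y the alkyl group on the internucleotide phosphate.
    """
    five_prime, three_prime = flank[0], flank[1]
    return f"5'-ATGGC{five_prime}({alkyl}){three_prime}GCTAT-3'"

lesion_contexts = [
    (alkyl, config, flank, odn_sequence(flank, alkyl))
    for alkyl, config, flank in product(ALKYL, CONFIG, FLANKS)
]
print(len(lesion_contexts), "lesion contexts")   # 2 alkyl x 2 configurations x 6 flanks = 24
print(lesion_contexts[0])
```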
Because the Sp and Rp diastereomers of the T(Me)C- and T(nBu)A-containing ODNs cannot be resolved from each other by HPLC, a mixture of the two diastereomers was utilized for these two ODNs in the subsequent experiments. We characterized the ODNs using electrospray ionization-MS (ESI-MS) and tandem MS (MS/MS), and the results confirmed the expected site of alkyl-PTE incorporation and the sequences of the modified ODNs (Figs. S6-S17). We employed a previously reported shuttle vector method to assess the bypass efficiencies and mutation frequencies of the alkyl-PTE lesions (12). The lesion-containing ODNs were ligated into single-stranded M13 phage (Fig. S18). After replication, progeny recovery, PCR amplification, and restriction enzyme digestion, the liberated ODNs were analyzed by native PAGE and LC-MS/MS to identify the replication products (Fig. 3 and Figs. S19-S30). As shown in Fig. 3, we employed two restriction enzymes, BbsI and MluCI, to digest the PCR products of the progeny genome, resulting in the release of the initial damage-containing region as 10-mer ODNs for the lesion-containing or control genome, or as a 13-mer ODN for the corresponding region in the competitor genome. By switching the order of digestion of the two restriction enzymes, we selectively radiolabeled the 5′-terminus of either the original lesion-situated strand (p*GGCMNGCTAT) or the opposite strand (p*AATTATAGCY), where p* designates the 32P-labeled phosphate group.

Effects of alkyl-PTE lesions on the fidelity of DNA replication in E. coli cells

Replication across the alkyl-PTEs may yield up to 16 potential products (i.e., with any of the four natural nucleotides incorporated at the two nucleosides flanking the PTE site), some of which could not be resolved from one another by PAGE analysis. Thus, we employed a restriction endonuclease and MS assay to identify the mutagenic products and to quantify the mutation frequencies (10). In this vein, the aforementioned digestion products were subjected to LC-MS and MS/MS analyses, where we monitored the fragmentation of the [M-3H]3- ions. The mutation frequencies were quantified by the calibrated ratios of peak areas found in the selected-ion chromatograms for the mutagenic and nonmutagenic products (Fig. 4b and Figs. S21 and S22). We found that, intriguingly, the replication products of the Sp-Me-PTEs at the three XT sites (X = A, C, or G) are largely independent of the neighboring 5′ nucleosides, with ~85-90% and ~5-10% of products carrying AT and TG at the bases flanking the initial damage site (XT), respectively, although replication across C(Me)T also yields 7% of the nonmutagenic replication product (Fig. 4b and Figs. S19 and S20). Additionally, none of the Rp diastereomers of the Me-PTEs in the XT sequences, none of the Me-PTEs in the TX sequences, and none of the nBu-PTEs in the TX or XT sequences are mutagenic (Figs. S19, S23, S29, and S30).

Impacts of alkyl-PTE lesions on the efficiency of DNA replication in E. coli cells

The bypass efficiencies of the Me-PTE lesions were quantified by comparing the relative intensity of the signal for the 10-mer products from the lesion-containing or control genome to that of the 13-mer replication product from the competitor genome; this value was adjusted based on the molar ratios of the lesion/competitor and control/competitor genomes used for the transfection (Fig. 4a).
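The bypass efficiency computation described in the preceding paragraph can be written compactly as below. The core ratio follows the formula given later in the Methods; how exactly the molar-ratio adjustment enters is an assumption on our part, so the function name and the optional arguments should be taken as illustrative rather than as the authors' exact normalization.

```python
def bypass_efficiency(lesion_signal, competitor_signal_lesion,
                      control_signal, competitor_signal_control,
                      molar_ratio_lesion=1.0, molar_ratio_control=1.0):
    """Bypass efficiency from band intensities in the competitive assay.

    Core relation, as given in the Methods:
        bypass efficiency (%) = (lesion/competitor) / (control/competitor) x 100%.
    The optional molar-ratio arguments sketch one plausible way to fold in the
    lesion/competitor and control/competitor molar ratios used for transfection;
    the exact normalization used by the authors is described in refs. 10-12.
    """
    lesion_ratio = (lesion_signal / competitor_signal_lesion) / molar_ratio_lesion
    control_ratio = (control_signal / competitor_signal_control) / molar_ratio_control
    return 100.0 * lesion_ratio / control_ratio

# Illustrative band intensities only, not measured data.
print(bypass_efficiency(1200.0, 1500.0, 1400.0, 1450.0))
```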
For Me-PTEs, none of the S p -PTEs at the three XT sites were strong impediments to DNA replication, although the corresponding R p diastereomers significantly blocked DNA replication. Additionally, the blockage effects were more pronounced when the flanking 59-nucleobase was a purine (adenine and guanine). Meanwhile, all Me-PTEs at TX sites elicited moderate blockage effects on DNA replication. We also examined whether SOS-induced DNA polymerases (Pol II, Pol IV, and Pol V) promote the replicative bypass of different Me-PTEs. Our results showed that, similar to the results obtained for the alkyl-PTE lesions at TT site (7), simultaneous depletion of all three SOS-induced DNA polymerases did not exert any apparent effects on the replication bypass efficiencies for the Me-PTE lesions (Fig. 4a). For nBu-PTEs, we found that none of the S p -PTEs at XT sites significantly impede DNA replication in E. coli cells, whereas the R p diastereomer in the XT sequence and both diastereomers of nBu-PTEs in the TX sequence elicit moderate blockage effects (Fig. 4c). Impact of Ada protein on the efficiency and fidelity of replication across alkyl-PTE lesions in E. coli cells Considering our previous observation that Ada protein can influence the mutagenicity and cytotoxicity of S p -Me-PTE at TT dinucleotide site (7), we next asked how Ada protein affects the replication bypass efficiencies and mutation patterns of S p -Me-PTEs in different flanking base sequences ( Fig. 5 and Figs. S25-S27). We found that genetic ablation of the ada gene resulted in a moderate decline in bypass efficiencies for the S p -Me-PTEs in all three XT sequences, and a complete abrogation of mutations. However, depletion of ada gene did not elicit significant changes in efficiency or accuracy in replication across the S p -Me-PTE in any of the three TX sequences. Discussion We previously investigated how size and stereochemistry of the alkyl-PTEs at TT site influence DNA replication in E. coli (7). Here, we systematically investigated how compositions of the flanking nucleobases of the alkyl-PTE lesions affect DNA replication. We found that replication across Me-PTEs in XT sequences (X = A, C, or G) shares common features as what we observed for the replicative bypass of T(Me)T (7). First, we demonstrated that neither the replication bypass efficiency nor the mutation frequency of Me-PTEs was impacted by concurrent ablation of the three SOS-induced DNA polymerases. Second, we revealed that none of S p -Me-PTEs in the three XT sequences suppress DNA replication in E. coli cells, whereas the R p -Me-PTEs in these sequence contexts exhibit significant replication blockage effects. Third, we show that S p -Me-PTEs at XT sites are mutagenic and Ada protein is indispensable for the mutagenic bypass. In this vein, removal of Ada protein also resulted in decreases in bypass efficiencies for the S p -Me-PTEs in the three XT sequences. We also uncovered some unique features regarding replicating past alkyl-PTEs with different flanking base sequences. Strikingly, we found that the distribution of replication products for S p -Me-PTEs in the three XT sequences (X = A, C, or G) was largely independent of the 59-neighboring nucleobase being an A, C, or G, where ;85-90% and 5-10% of the replication products were with AT and TG being inserted at the initial XT site, respectively (Fig. 4b). 
In this context, it is worth noting that the Sp-Me-PTE at the TT site induced 50% TT→GT and 15% TT→GC mutations, and the induction of these mutations also entails the Ada protein (7). Ada was reported to remove the methyl group from Sp-Me-PTEs, where Cys-38 in the N-terminal domain of the protein interacts with the methyl group on the DNA backbone (13, 14). A previous structural study showed that the N-terminal domain of the Ada protein preferentially recognizes A/T in the Ada box (13, 15). In this recognition, Arg-45 participates in hydrogen bonding interactions with thymine residues in both strands, and the binding with Arg-45 is sterically incompatible with a G positioned in the last two bases of the Ada box (13, 15), which is consistent with the sequence specificity observed for replication past alkyl-PTEs. In contrast to what we found for Me-PTEs at XT sites (X = A, C, G, or T), replication across the Me-PTEs at TX sites (X = A, C, or G) was accurate. In addition, neither the efficiency nor the fidelity of replication across any of the Me-PTEs at TX sites was altered upon genetic ablation of Ada. We also examined replication past nBu-PTEs and found that the trends in bypass efficiency were similar to what we found for Me-PTEs (Fig. 4c); however, none of the nBu-PTEs were mutagenic. The results from our replication studies prompted us to propose a tentative model where the Ada protein binds to Sp-Me-PTE lesions formed at XT dinucleotide sites, and this binding is maintained during the replicative bypass of these lesions in ssDNA. In this vein, the very high levels of AT (85-90%) and GT (5-10%) replication products induced at the Sp-Me-PTE-containing XT (X = A, C, or G) dinucleotide sites, together with the dependence of these product distributions on Ada, suggest that the Ada protein is bound to the lesion during the replicative bypass of the Sp-Me-PTE at these sites (Fig. 4b). Moreover, this interaction with the Ada protein also assists the replicative bypass of these lesions in E. coli cells. The lack of apparent dependence of the distributions of replication products for the Sp-Me-PTE lesion in the three XT sequences (X = A, C, or G) on the identities of the 5′-flanking nucleobases strongly suggests their lack of recognition by DNA polymerases during nucleotide incorporation at the site. These results, together with the previous observation that an arginine residue in REV1 could direct the incorporation of dCMP through a direct hydrogen bonding interaction with the nucleobase in the incoming nucleotide (16), indicate that some amino acid residue(s) in the Ada protein may direct nucleotide incorporation when the E. coli replication machinery incorporates a nucleotide opposite the 5′ flanking base of the Me-PTE lesions. Harper and Lee (17) analyzed the mutations induced by N-methyl-N′-nitro-N-nitrosoguanidine in 16 different strains of E. coli and found that 96.6% of the 4099 detected mutations were G→A transition mutations, which were attributed to O6-MedG (18, 19). Our results, however, suggest that the G→A mutation may also arise, in part, from Sp-Me-PTE formed at a GT site. In summary, the results from our shuttle vector-based replication study showed that flanking base sequences play important roles in DNA replication across Me- and nBu-PTE lesions. Sp-Me-PTEs and Sp-nBu-PTEs at XT sites (X = A, C, or G) were not strong impediments to DNA replication, whereas their Rp counterparts exhibited blockage effects. Meanwhile, Me- and nBu-PTEs at TX sites moderately block DNA replication in E. coli.
Furthermore, replication across Sp-Me-PTEs at XT sites is mutagenic, which requires the presence of the Ada protein. However, this phenomenon was not observed for Rp-Me-PTEs at XT sites, which is consistent with the notion that the Ada protein does not recognize Rp-Me-PTEs. There are two established functions for the Ada protein, i.e., removal of the methyl group from Sp-Me-PTE or O6-MedG, and transcriptional activation of the ada, alkA, alkB, and aidB genes (3, 20, 21). The N-terminal domain of Ada is required for its functions in the repair of Sp-Me-PTE and in transcriptional regulation (13, 22), whereas the C-terminal domain of Ada is necessary for the repair of O6-MedG (23). Our work suggests that, aside from these two well-characterized functions, Ada may assume other important functions in cells, i.e., by binding and modulating the replicative bypass of Sp-Me-PTE lesions in some sequence contexts. In the future, it will be important to investigate whether a mutant form of the Ada protein that is competent in binding Sp-Me-PTE, but deficient in removal of the methyl group from the lesion (e.g., a Cys-38 mutant), can still support the replicative bypass of the lesion, and if so, whether this role of Ada is modulated by the flanking nucleobases of the Sp-Me-PTE. Additionally, future studies about how alkyl-PTE lesions influence DNA replication in mammalian cells will provide additional insights into the biological impacts of this unique class of DNA damage.

Chemical syntheses

The detailed materials, synthetic procedures, reaction yields, and spectroscopic characterizations of the compounds are provided in the supporting information, and the NMR spectra for these compounds are shown in Figs. S1-S4.

ODN synthesis

A Beckman Oligo 1000M DNA synthesizer (Fullerton, CA, USA) was used to synthesize the 12-mer lesion-containing ODNs, 5′-ATGGCX(Y)TGCTAT-3′ and 5′-ATGGCT(Y)XGCTAT-3′ (X represents A, C, or G; Y represents an Me or nBu group), at 1 μmol scale. The synthesized phosphoramidite building block was dissolved in anhydrous acetonitrile at a concentration of 67 mM. Incorporation of unmodified nucleotides was conducted by using commercially available ultramild phosphoramidite building blocks (Glen Research Inc., Sterling, VA, USA) following standard protocols. Synthesized ODNs were cleaved and deprotected from the controlled pore glass with concentrated ammonium hydroxide at room temperature for 55 min. After solvent removal using a SpeedVac, the solid residues were dissolved in water and purified by HPLC.

HPLC

HPLC separation was conducted on an Agilent 1100 system with a Synergi Fusion-RP column (10 × 150 mm, 4 μm particle size, 80 Å pore size; Phenomenex Inc., Torrance, CA, USA). Triethylammonium acetate (TEAA) solution (50 mM, pH 6.8) and a mixture of 50 mM TEAA and acetonitrile (70:30, v/v) were employed as mobile phases A and B, respectively. The gradient profile was 5-30% B in 5 min and 30-60% B in 70 min, and the flow rate was 0.8 ml/min. The HPLC traces for the purification of the 12-mer lesion-containing ODNs are shown in Fig. S5, and their ESI-MS and MS/MS are provided in Figs. S6-S17. The single-stranded lesion-containing and lesion-free competitor M13 genomes were prepared following published procedures (Fig. S18) (12). Briefly, 20 pmol of M13mp7 (L2) plasmid was digested with 40 units of EcoRI-HF at 25°C for 8 h to linearize the vector.
Two scaffolds, 5′-CTTCCACTCACTGAATCATGGTCATAGCTTTC-3′ and 5′-AAAACGACGGCCAGTGAATTATAGC-3′ (25 pmol each), were subsequently annealed with the linearized vector. A phosphorylated 22-mer lesion-containing or lesion-free control ODN, or a 25-mer competitor ODN, was then added to the mixture and incubated with the scaffold ODNs and T4 DNA ligase at 16°C for 8 h. Unligated linear vector and ODNs were removed by the exonuclease activity of T4 DNA polymerase (22.5 units, 16°C for 2 h). The resulting plasmids were purified using the Cycle Pure Kit (Omega), and the purified lesion-containing and lesion-free control plasmids were subsequently normalized against the competitor plasmid (12).

Preparation of the Ada-deficient AB1157 E. coli strain

P1 transduction was employed to obtain the Ada-deficient E. coli strain (Δada::kan) in the AB1157 background from the JW2201-1 strain (26). The genotype of the ensuing deficient strain was confirmed by antibiotic resistance and by PCR followed by sequencing.

Transfection of M13 genomes into E. coli cells

The lesion-free control or lesion-containing plasmids were mixed with the competitor plasmid at a 1:1 molar ratio. The mixtures were transfected into the electrocompetent WT AB1157 E. coli strain as well as into the isogenic cells deficient in Ada or in all three SOS-induced DNA polymerases (i.e., Pol II, Pol IV, and Pol V; TKO) (11). The transfected E. coli cells were cultured at 37°C for 5.5 h, and the M13 phage was isolated from the supernatant by centrifugation at 13,200 rpm for 5 min. The purified M13 phage was transfected into SCS110 E. coli cells for amplification, followed by extraction with the QIAprep Spin M13 kit (Qiagen) to obtain the M13 ssDNA template for PCR amplification.

Quantification of bypass efficiency by the competitive replication and adduct bypass assay

We utilized a modified version of the competitive replication and adduct bypass assay to assess the bypass efficiency of alkyl-PTE lesions in E. coli cells (10-12, 19). The regions of interest in the progeny M13 genomes were amplified by PCR with the use of Phusion high-fidelity DNA polymerase. The primers were 5′-YCAGCTATGACCATGATTCAGTGAGTGGA-3′ and 5′-YTCGGTGCGGGCCTCTTCGCTATTAC-3′, where Y represents a 5′-amino modifier, i.e., H2N(CH2)6-, conjugated to the 5′-phosphate group of the ODNs. The PCR amplification started with 98°C for 30 s, followed by 35 cycles of amplification, with each cycle consisting of 98°C for 10 s, 65°C for 30 s, and 72°C for 15 s, and then a final extension at 72°C for 5 min, ending at 4°C. The PCR products were purified with the Cycle Pure kit (Omega). The PCR products (100 ng) were digested with BbsI-HF restriction endonuclease (10 units) and recombinant shrimp alkaline phosphatase (rSAP, 10 units) in 10 μl of 1× CutSmart buffer (New England Biolabs) at 37°C for 25 min, followed by deactivation of rSAP at 80°C for 10 min. To the above mixture were added 5 mM DTT, 1.66 pmol of [γ-32P]ATP, 10 units of T4 polynucleotide kinase (T4 PNK), CutSmart buffer, and water to give a total volume of 15 μl. The mixture was incubated at 37°C for 30 min, and T4 PNK was then deactivated by heating the solution at 70°C for 10 min. The resulting mixture was further digested with 10 units of MluCI at 37°C for 25 min and subsequently quenched by adding 15 μl of formamide gel-loading buffer containing xylene cyanol FF and bromphenol blue dyes.
The radiolabeled digestion mixtures were resolved using a 30% native polyacrylamide gel (19:1 acrylamide:bis), and the intensities of the radiolabeled gel bands were measured using a Typhoon 9410 imager. The aforementioned digestion procedures yield a 10-mer duplex, 5′-p*GGCMNGCTAT-3′/5′-AATTATAGCY-3′, for full-length replication products of the lesion-containing plasmid, with M and N being the nucleobases at the dinucleotide initially flanking the alkyl-PTE lesion, Y being the complementary base of N in the opposite strand, and p* being the radiolabeled phosphate. The corresponding digestion of the PCR product for the competitor genome yielded a 13-mer duplex, 5′-p*GGCGATAAGCTAT-3′/5′-AATTATAGCTTAT-3′. The bypass efficiency was calculated as: bypass efficiency (%) = (lesion signal/competitor signal)/(control signal/competitor signal) × 100%.

Identification and quantification of mutation frequencies by MS

Approximately 3 μg of PCR products in 250 μl of 1× CutSmart buffer was mixed with 50 units of BbsI-HF and 20 units of rSAP and incubated at 37°C for 2 h, followed by deactivation of rSAP at 80°C for 20 min. MluCI (20 units) was subsequently added to the mixture, and the digestion was continued at 37°C for 1 h. The resulting mixture was extracted once with phenol/chloroform/isoamyl alcohol (25:24:1, v/v), and the aqueous phase was evaporated, desalted with Waters Oasis HLB extraction cartridges (Milford, MA, USA), and redissolved in 10 μl of water. A 5-μl aliquot was analyzed by LC-MS/MS on an LTQ linear ion trap mass spectrometer (Thermo Electron, San Jose, CA, USA) with an Agilent Zorbax SB-C18 column (0.5 × 150 mm, 5 μm particle size). The gradient was 5 min of 5-20% methanol followed by 35 min of 20-50% methanol in 400 mM HFIP (pH adjusted to 7.0 with triethylamine). The temperature of the ion transfer tube was 300°C, and the mass spectrometer was set up to acquire higher-resolution ultra-zoom scan MS and full-scan MS/MS for the [M-3H]3- ions of the 10-mer ODNs, d(GGCMNGCTAT), with M and N being A, T, C, or G.

Data availability

All data are contained within the manuscript.
The hair-trigger effect for a class of nonlocal nonlinear equations We prove the hair-trigger effect for a class of nonlocal nonlinear evolution equations on $\mathbb{R}^d$ which have only two constant stationary solutions, $0$ and $\theta>0$. The effect consists in that the solution with an initial condition non identical to zero converges (when time goes to $\infty$) to $\theta$ locally uniformly in $\mathbb{R}^d$. We find also sufficient conditions for existence, uniqueness and comparison principle in the considered equations. Introduction We will deal with the following nonlinear nonlocal evolution equation on the Euclidean space R d , d ≥ 1: ∂u ∂t (x, t) = κ(a * u)(x, t) − mu(x, t) − u(x, t)(Gu)(x, t) (1.1) for t > 0, x ∈ R d , with an initial condition u(x, 0) = u 0 (x), x ∈ R d . Here m, κ > 0; a is a nonnegative probability kernel on R d , i.e. 0 ≤ a ∈ L 1 (R d ) and and G is a mapping on a space of bounded on R d functions. We interpret u(x, t) as a density of a population at the point x ∈ R d at the moment of time t ≥ 0. The probability kernel a = a(x) describes distribution of the birth of new individuals with constant intensity κ > 0. Individuals in the population may also die either with the constant mortality rate m > 0 or because of the competition, described by the density dependent rate Gu, where G is an (in general, also nonlinear) operator on a space of bounded functions (cf. the discussion in [53]). The equation (1.1) can be also rewritten in a reaction-diffusion form ∂u ∂t (x, t) = κ(a * u)(x, t) − κu(x, t) + (F u)(x, t), (1.4) where F u := u(κ − m − Gu) (1.5) plays the role of the so-called reaction term, whereas Lu := κ(a * u) − κu (1.6) describes the non-local diffusion generator, see e.g. [4] (note that L is also known as the generator of a continuous time random walk in R d or of a compound Poisson process on R d ). As a result, the solution u to the equation (1.4) may be interpreted as a density of a species which invades according to a nonlocal diffusion within the space R d meeting a reaction F ; see e.g. [28,49,55]. Below, we restrict ourselves to the case where (1.1) has two constant solutions u ≡ 0 and u ≡ θ > 0 only. The main aim of the present paper is to find sufficient conditions for the so-called hair-trigger effect. The latter means that, unless u 0 ≡ 0, the corresponding solution to (1.1) achieves an arbitrary chosen level between 0 and θ uniformly on an arbitrary chosen domain of R d after a finite time. In other words, u(x, t) converges, as t → ∞, locally uniformly in x ∈ R d to the positive stationary solution u ≡ θ. The latter constant solution, therefore, is globally asymptotically stable in the sense of the topology of local uniform convergence. Therefore, the equation (1.1) appears of the so-called monostable type; cf. also Remark 5.5 below. Firstly, a reaction-diffusion equation of the form (1.4) was considered in the seminal paper [44] by Kolmogorov-Petrovsky-Piskunov (KPP). There, for the local reaction F u = f (u) = u(1−u) 2 (that corresponds to Gu = 2u−u 2 in (1.5); we set also here κ − m = 1), the equation (1.4) was derived from a model for the dispersion of a spatially distributed species. To analyze the model, the authors used a diffusion scaling, which led to the classical local diffusion generator κ∆u (for d = 1) instead of L in (1.4). Moreover, they proposed the method which covered more general local reactions F u = f (u) as well. 
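For reference, the model equations introduced above can be collected in one place. The normalization of the kernel and the convolution, labelled (1.2) and (1.3), are written here in their standard form, which is what the surrounding text implies.

```latex
% Minimal restatement of the model (1.1) and its reaction-diffusion form (1.4)-(1.6).
\begin{align*}
&\frac{\partial u}{\partial t}(x,t)
   = \kappa\,(a * u)(x,t) - m\,u(x,t) - u(x,t)\,(Gu)(x,t),
   \qquad t>0,\ x\in\mathbb{R}^d, &&(1.1)\\
&\int_{\mathbb{R}^d} a(y)\,\mathrm{d}y = 1,
   \qquad (a*u)(x,t)=\int_{\mathbb{R}^d} a(x-y)\,u(y,t)\,\mathrm{d}y, &&(1.2),\,(1.3)\\
&\frac{\partial u}{\partial t} = \kappa\,(a*u)-\kappa u + Fu,
   \qquad Fu := u\,(\kappa-m-Gu),
   \qquad Lu := \kappa\,(a*u)-\kappa u. &&(1.4)\text{--}(1.6)
\end{align*}
```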
We will say that such local reaction F has the KPP-type if f : R → R is Lipschitz continuous on [0, θ] and In particular, the logistic reaction f (u) = u(θ − u), that corresponds to the identical mapping Gu = u in (1.5), satisfies (1.7). The corresponding model was considered early by Fisher [34], it described the advance of a favorable allele through a spatially distributed population. Note that the conditions for the mapping G (and hence, by product, for the reaction F ) which we postulate in Section 2 below are reduced, in the case of a local reaction F u = f (u), to (1.7) (see Example 1 below). Later, the significance of nonlocal terms in diffusion and/or reaction in (1.4) was stressed by many authors, in particular, in ecology and population biology, see e.g. [14,16,45]; see also recent papers [8,50] where the importance and observed effects of nonlocal interactions in biological models are discussed. A natural nonlocal analogue of the Fisher-KPP equation with the mentioned local reaction f (u) = u(θ − u) is the equation (1.4) with both nonlocal diffusion generator (1.6) and the linear nonlocal mapping Gu = κ − a − * u in (1.5), where κ − > 0, 0 ≤ a − ∈ L 1 (R d ) with R d a − (x) dx = 1, and the convolution is defined as in (1.3) (see Example 2 below). The corresponding equations (1.1), or (1.4), similarly to the classical Fisher-KPP equation, may be obtained from different models. In particular, for the case κ = κ − , a = a − , it was obtained, for m = 0 in [47,48] from a model of simple epidemic, whereas, for m > 0, it was derived in [27] from a crabgrass model on the lattice Z d . For different kernels a and a − , the equation (1.1) appeared in [11] from a population ecology model; see also [12,23] and the rigorous derivation of (1.1) in [29,35] More generally, a nonlocal analogue of the local KKP-type reaction f (u) = u(θ − u) n is, naturally, the reaction F u = γ n u(θ − a − * u) n , n ∈ N, (1.8) with a − is as above and γ n > 0 (see Example 3 below). Note also that the equation (1.4) with the nonlocal diffusion (1.6) and a local KPP-type reaction F u = f (u) was considered in [54] motivated by an analogy to Kendall's epidemic model [43]. The first (up to our knowledge) result about the hair-trigger effect described above, for a non-linear evolution equation with the local diffusion, was shown by Kanel [42], for the cases of the combustion and the Fisher-KPP reactiondiffusion equations in the dimension d = 1. Multidimensional analogues were shown by Aronson and Weinberger [6,7]; in the latter reference the notion 'hairtrigger' was, probably, firstly used. For the nonlocal diffusion (1.6), the first result about the hair-trigger effect for a solution to (1.4) was obtained in [46]: for the one-dimensional case d = 1, under additional restrictions on the probability kernel a = a(x), and for a local reaction F u = f (u) of the KPP-type given by (1.7). For the nonlocal diffusion in R d with d > 1, the hair-trigger effect, for the local reaction term f (u) = u 1+p (1−u) with p > 0, has been shown recently in [2], under additional assumptions on a = a(x) (in particular, its radial symmetry was assumed). From this, by comparison-type arguments, it might be possible to show the hair-trigger effect for a local KPP-type reaction F u = f (u) described by (1.7), provided that, additionally, f (θ) < 0. To the best of our knowledge, the present paper is the first one that shows the hair-trigger effect for non-local reactions. 
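The following minimal numerical sketch (not taken from the works cited above) integrates the one-dimensional nonlocal Fisher-KPP equation with Gu = κ^-(a^- * u), choosing κ = κ^- and a Gaussian kernel a = a^-, so that the two constant states are 0 and θ = (κ − m)/κ^-. Starting from a small compactly supported bump, the solution approaches θ on any fixed bounded set, which is the hair-trigger behavior discussed in this paper; the time stepping, kernel choice, and parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters: kappa = kappa^-, Gaussian kernels a = a^-.
kappa, m, kappa_minus = 1.0, 0.2, 1.0
theta = (kappa - m) / kappa_minus            # the positive constant solution

# Periodic spatial grid and a normalized Gaussian dispersal kernel.
L, n = 200.0, 4096
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
a = np.exp(-0.5 * x**2)
a /= a.sum() * dx
a_hat = np.fft.fft(np.fft.ifftshift(a)) * dx  # kernel transform for convolutions

def conv_a(u):
    """Periodic approximation of the convolution (a * u)(x) via the FFT."""
    return np.real(np.fft.ifft(a_hat * np.fft.fft(u)))

# Nonnegative initial condition, not identically zero, supported on |x| < 1.
u = np.where(np.abs(x) < 1.0, 0.05, 0.0)

dt, T = 0.05, 200.0
for _ in range(int(T / dt)):
    au = conv_a(u)                            # appears in both birth and competition terms
    u = u + dt * (kappa * au - m * u - u * kappa_minus * au)
    u = np.clip(u, 0.0, theta)                # numerical safeguard; cf. the comparison principle

# Hair-trigger effect: u should be close to theta on any fixed bounded set.
mask = np.abs(x) < 10.0
print("min of u on |x| < 10:", u[mask].min(), "  theta =", theta)
```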
In particular, we allow the reaction (1.8) in (1.4)-(1.5), provided that an appropriate comparison between a and a − is assumed (see Examples 2-3 below). Another novelty of the present paper, even for the case of the local KPPtype reactions F u = f (u) given by (1.7) is that we allow general anisotropic probability kernels a = a(x), x ∈ R d (see Example 1 below). Note, that, however, we do not cover the local reaction f (u) = u 1+p (1 − u) with p > 0, considered in [2]. For results about the hair-trigger effect in other types of non-local equations see also [24]. Assumptions and main results Recall, that we treat u = u(x, t) as the local density of a system at the point x ∈ R d and at the moment of time t ∈ R + := [0, ∞). We assume that the initial condition u 0 to (1.1) is a bounded function on R d . Namely, we will consider the following Banach spaces of real-valued functions on R d : the space C b (R d ) of bounded continuous functions on R d with supnorm, the space C ub (R d ) of bounded uniformly continuous functions on R d with sup-norm, and the space L ∞ (R d ) of essentially bounded (with respect to the Lebesgue measure) functions on R d with esssup-norm. Let E be either of the spaces with the corresponding norm denoting by · E . For an interval I ⊂ R + , let C(I → E) and C 1 (I → E) denote the sets of all continuous and, respectively continuously differentiable, E-valued functions on I. Definition 2.1. Let I be either a finite interval [0, T ], for some T > 0, or the whole R + := [0, ∞). A function u ∈ U I := C(I → E) ∩ C 1 ((I \ {0}) → E) which satisfies (1.1) and such that u(·, 0) = u 0 (·) in E is said to be a classical solution to (1.1) on I. For brevity, we denote also Here and below, for the case E = L ∞ (R d ), we will treat the latter inclusion a.e. only. Set also, for an r > 0, We denote by T y : E → E, y ∈ R d , the translation operator, given by A sequence of functions (v n ) n∈N ⊂ E is said to be convergent to a function v ∈ E locally uniformly if (v n ) n∈N converges to v uniformly on all compact subsets of R d . We denote this by v n loc = = ⇒ v, n → ∞, Let also B r (x 0 ) denote the ball in R d with the radius r > 0 centered at the x 0 ∈ R d . In the case x 0 = 0 ∈ R d , we will just write B r := B r (0). In Section 3, we prove an existence and uniqueness result for a more general equation than (1.1); it can be read in the case of (1.1) as follows Then, for any T > 0 and 0 ≤ u 0 ∈ E, there exists a unique nonnegative classical solution u to (1.1) on [0, T ]. In particular, u ∈ U ∞ . To exclude the trivial case when u(·, t) E converges to 0 uniformly in time, we assume that We suppose that there exist two constant solutions u ≡ 0 and u ≡ θ > 0 to (1.1), more precisely, there exists θ > 0 such that We will also assume that G is (locally) Lipschitz continuous in E + θ , namely, there exists l θ > 0, such that We restrict ourselves to the case when the comparison principle for (1.1) holds. Namely, we assume that the right-hand side of (1.1) is a (quasi-)monotone operator: In Section 4, we also prove that the comparison principle holds for a more general equation than (1.1); in the case of (1.1) it gives the following result. Then, for all t ∈ [0, T ], x ∈ R d , 2. Let u ∈ U ∞ be a classical solution to (1.1), given by Theorem 2.2, such that 0 ≤ u 0 ≤ θ. Then, for all t ∈ R + , x ∈ R d , In particular, combining two previous parts, we get the following statement. 
Let functions We assume next that the kernel a is not degenerate at the origin, namely, Stability of the solution to (1.1) with respect to the initial condition in the topology of locally uniform convergence requires continuity of G in this topology: We will consider the translation invariant case only: let T y , y ∈ R d , be a translation operator, given by (2.2), then Under (A7), for any r ≡ const ∈ (0, θ), Gr ≡ const. In this case, we assume also that In Section 5, we prove the hair-trigger effect for the solutions to (1.1). For technical reasons, it will be done separately for kernels with and without the first moment. Namely, for the kernels which satisfy the condition we set and assume, additionally to (A4), that Remark 2.4. We are going to formulate now our main results about the hairtrigger effect for a solution to (1.1). It requires that the initial condition to (1.1) is not degenerate: if E is a space of continuous functions, this means that u 0 is not identically equal to zero, u 0 ≡ 0. For a brevity of notations, in the case E = L ∞ (R d ), we will treat u 0 ≡ 0 as follows: there exists δ > 0 and x 0 ∈ R d , such that u 0 (x) ≥ δ for a.a. x ∈ B δ (x 0 ). Then we can formulate the following Theorem 2.5. Let the conditions (A1)-(A10) hold. Let u 0 ∈ E + θ , u 0 ≡ 0 (cf. Remark 2.4), and let u be the corresponding solution to (1.1). Then, for m defined by (2.4) and any compact set K ⊂ R d , (2.5) Remark 2.6. Note that the correction term tm = tκ R d ya(y)dy in (2.5) equals to the expected value of the compound Poisson process with the probability density a and the intensity κ. An evident example of a probability kernel with an infinite first moment is the density a(x) = c(1 + |x| 2 ) − 1+d 2 , x ∈ R d of the multivariate Cauchy distribution; here |·| denotes the Euclidean norm in R d , and c is the normalizing factor to ensure (1.2). To include this and other cases, for the kernels which do not satisfy (A9), we consider the following assumption: which satisfy (A1)-(A10) instead of a, κ, G, θ, correspondingly, such that κ n a n * w − wG n w ≤ κa * w − wGw, w ∈ E + θn . (A11) Then the following counterpart of Theorem 2.5 holds. Theorem 2.7. Let the condition (A11) hold. Let u 0 ∈ E + θ , u 0 ≡ 0 (cf. Remark 2.4), and let u be the corresponding solution to (1.1). Then, for any compact set K ⊂ R d and for any n ∈ N, In particular, if m n = m ∈ R for all n ≥ n 0 ∈ N, then In particular, if (A1)-(A10) hold and m = 0 ∈ R d or if (A11) holds and m n = 0 ∈ R d for all n ≥ n 0 ∈ N, then one gets the desired hair-trigger effect described above. Remark 2.8. Note that, indeed, for a properly 'slanted' anisotropic kernel a with m = 0 ∈ R d , the solution to (1.1) may converge to 0 uniformly on any ball centered at the origin, whereas it will converge to θ on the 'time-moving' ball according to Theorems 2.5 or 2.7; see [30] for the corresponding result in the case of the Example 2 described below. Existence and uniqueness In this Section, we will show the existence and uniqueness of non-negative solutions to a generalized version of (1. .1) is bounded on E and G is continuous, this solution will be the classical one. Moreover, if t max < ∞, then, with necessity, u(·, t) E → ∞, as t t max . However, given u 0 ≥ 0, the general theory does not ensure that (A2) and the conditions of Theorem 2.2). Indeed, Duhamel's principle would imply then that , and hence u(·, t) E remains bounded on any finite time interval. 
2) Another sufficient condition that would guarantee t max = ∞ is, therefore, the a priori global boundedness of u. In the case of the 'local' operator G, corresponding to the local reaction F u = f (u) in (1.4) (cf. Example 1), the global boundedness will follow from the comparison arguments considered in the Section 4 below (cf. Theorem 2.3). However, the case of a nonlocal operator G, and hence a nonlocal reaction F , would require a restrictive assumption (A4) for comparison. Moreover, one can modify the example in [40, pp. 2738-2739] to show that, in general, a solution to (1.1) does not need to be globally bounded on R + . 3) Note also, that any globally Lipschitz reaction F (and hence globally Lipschitz product u Gu) would lead to t max = ∞ (see e.g. [ To avoid aforementioned additional assumptions for the non-local case of G and F , we consider here a direct proof of the existence and uniqueness of non-negative solutions to (a generalized version of) the equation (1.1). Our proof uses standard fixed point-arguments to get existence and uniqueness on consecutive time intervals [Υ j , Υ j+1 ], j ≥ 0, Υ 0 = 0. Then, using Lemma 3.2 below, we will show that j≥0 (Υ j+1 − Υ j ) = ∞ that implies the existence and uniqueness on an arbitrary time-interval. Lemma 3.2. Let {r n } n∈N be a sequence of numbers, such that r 1 > 0 and the following recurrence relation holds where p, q > 0. Then the series n∈N 1 r n e qrn is divergent. Proof. By (3.1), r n , n ∈ N is a positive increasing sequence. Passing to the limit in (3.1) when n → ∞, one gets that r n → ∞, as n → ∞. Hence, without loss of generality, one can assume that b n := e −qrn < (pq) −1 , n ∈ N. One can rewrite then (3.1) as follows: b n+1 = b n e −pqbn . It is straightforward to check that Therefore, if we set c 1 := b 1 and c n+1 := cn 1+pq(e−1)cn , n ∈ N, we get c n ≤ b n , n ∈ N. On the other hand, 1 cn+1 = 1 cn + pq(e − 1), that leads to The statement is proved. For simplicity of notation, we denote also We are ready to prove now the existence and uniqueness result. Then, for any T > 0 and 0 ≤ u 0 ∈ E, there exists a unique nonnegative classical solution u ∈ U T (cf. Definition 2.1) to the equation Proof. First, we note that, by (3.4), We where 0 ≤ u τ ∈ E, τ > 0, and u 0 is the same as in (3.6). By assumptions on A and G, we have that Av, Gv In the right-hand side of (3.8), there is a time-dependent linear bounded operator (acting in u) in the space E whose coefficients are continuous on [τ, T ]. Therefore, there exists a unique solution to (3.8) where we used (3.7) and the notation (3.3). Therefore, For any T 2 > T 1 ≥ 0 and r > 0, we define Let now 0 ≤ τ < Υ ≤ T , and take any v, w ∈ X + τ,Υ (r). By (3.9), one has, for any where (3.14) Since |e −a − e −b | ≤ |a − b|, for any constants a, b ≥ 0, one has, by (3.10), (3.14), Next, for any constants a, b, p, q ≥ 0, therefore, by (3.10), (3.14), as re −r ≤ e −1 , r ≥ 0. Take any µ ≥ u τ E . By (3.11)-(3.16), one has, Therefore, Φ τ will be a contraction mapping on the set X + τ,Υ (r) if only Take for α ∈ (0, 1), Then, the second inequality in (3.17) holds, since e κr is increasing, namely, Next In order to satisfy the second inequality in (3.17) it is sufficient to check, but re κµ = µe κµ + αme, i.e. we need Choose α ∈ (0, 1), such that αme 2κ < 1 − α, and then choose µ > 0 large enough to ensure (3.19). As a result, one gets that Φ τ will be a contraction on the set X + τ,Υ (r) with Υ and r given by (3.18); the latter set naturally forms a complete metric space. 
Therefore, there exists a unique u ∈ X + τ,Υ (r) such that Φ τ u = u. This u will be a solution to (3.6) on [τ, Υ]. To fulfill the proof of the statement, one can do the following. Set τ := 0, Iterating this scheme, take sequentially, for each n ∈ N, τ := Υ n , x ∈ R d , Since r n > r n−1 and e κr is increasing, the same α as before will satisfy (3.19) with µ = r n as well. Then, one gets a solution u to (3.6) on [Υ n , Υ n+1 ] with initial condition u Υn , where and u Υn,Υn+1 ≤ r n + αme 1−κrn = r n+1 . As a result, we will have a solution u to (3.6) on intervals [0, , the right-hand side of (3.6), will be continuous on each of constructed time-intervals, therefore, one has that u is continuously differentiable on (0, Υ n+1 ] and solves (1.1) there. By (3.20) and Lemma 3.2, therefore, one has a solution to (3.6) on any [0, T ], T > 0. To prove uniqueness, suppose that Since {r n } n≥0 above is an increasing sequence, v will belong to each of sets X + Υn,Υn+1 (r n+1 ), n ≥ 0, Υ 0 := 0, considered above. Then, being solution to (3.6) on each [Υ n , Υ n+1 ], v will be a fixed point for Φ Υn . By the uniqueness of such a point, v coincides with u on each [Υ n , Υ n+1 ] and, thus, on the whole [0, T ]. . Thus u is a classical solution to (1.1). The proof is fulfilled. Comparison principle The comparison principle is a standard tool in studying parabolic-and elliptictype equations, see e.g. [21,37]. For instance, it allows to estimate an unknown solution, constructing explicit sub-and super-solutions [5][6][7]. See also [17,18,54] for comparison results and its applications in studying traveling waves for nonlocal equations. To the best of our knowledge, the first detailed proof of the comparison principle for the parabolic equation in the case of nonlocal diffusion (1.6) in (1.4), was done by Yagisita [58] in the case of globally Lipschitz KPPtype reaction F u = f (u) (see also [46,Lemma D.1]). The comparison principle is often used in other articles without any reference on the proof. Also we do not know any result on the comparison principle in the case of a non-local reaction. We will get in Theorem 4.2 the comparison principle related to an abstract evolution equation where H : E → E is locally Lipschitz continuous and such that the operator H + p is monotone on E for some p > 0. Here and below we use the same notation for a constant and for the operator of multiplication by this constant in the space E. Let H : E → E. For any u ∈ U T , cf. (2.1), and r > 0, we define Here and below we consider the left derivative at t = T only. 2) Hv Let T > 0 be fixed. Suppose that u 1 , u 2 ∈ U T are such that Proof. Define, cf. (4.5), the following function for (x, t) ∈ R d × [0, T ]. For a constant K > 0, which will be specified later, consider the mapping We have, for w ≥ 0, Since, for any x ≥ y ≥ 0, z ≥ 0, if only K ≥ p that we will assume in the following. Next, applying (4.2) to (4.8), we will get that w Therefore, since u 1 , u 2 ∈ U T implies, by (4.1), (4.7), that Define also the function Clearly, v ∈ U T , and it is straightforward to check that Therefore, v solves the following integral equation in E: where v(x, 0) ≥ 0 by (4.6). If we take T < T such that the following inequality holds Choose an arbitrary extension of G on {0 ≤ v ∈ E} such that (3.5) holds. By Theorem 2.2, there exists a unique classical solution u to (1.1). Hence 0 ≤ u = u ≤ θ. The proof is fulfilled. 5 The hair-trigger effect: proofs of Theorems 2.5, 2.7 We are going to prove our main Theorems 2.5 and 2.7. 
The Section is organized as follows. First, in Propositions 5.1-5.2, we show some properties of solutions to (1.1) with continuous initial conditions. Note that, by existence and uniqueness Theorem 2.2, the solutions will be also continuous and, moreover, by comparison Theorem 2.3, any solution in E = L ∞ (R d ) can be estimated from above and below by continuous ones taking the corresponding estimates for the initial condition u 0 ≡ 0, cf. Remark 2.4. Next, we describe general Weinberger's scheme [57] for a dynamical system in discrete time in the context of the equation (1.1) (Propositions 5.4 and 5.7, Lemma 5.8), and prove the corresponding result for continuous time (Proposition 5.11). The latter result is proved under additional assumptions inherited by general Weinberger's approach: a technical assumption (5.17) on the dynamical system and an assumption (5.18) on the initial condition u 0 , which cannot be verified for particular examples of u 0 , cf. Remark 5.9. Then, in Proposition 5.13, by using Lemma 5.12, we prove that the technical assumption (5.17) holds. To get rid of restrictions on initial condition u 0 , one needs more machinery. Namely, we find in Proposition 5.14 a useful sub-solution to the linearization of the equation (1.1) around the zero solution. Next, we show that (being multiplied on a small enough constant) it will be a sub-solution to the nonlinear equation (1.1) as well (Proposition 5.15) and, in Proposition 5.16, we show that a solution to (1.1) becomes larger than the sub-solution after a big enough time. As a result, one can show that Weinberger's assumption (5.18) on the initial condition is fulfilled (just starting from a moment of time t 0 > 0 rather than from 0). Finally, in the proof of Theorem 2.7, we show how to deal with the kernels without the first moment (where the assumption (A9) fails). Proof. Being classical solution to (1.1), u satisfies the integral equation Hence for any x, y ∈ R d , 0 ≤ τ < t, one has that fulfills the proof of the first inclusion. Then, the second one follows from the inequality u(·, t) E − u(·, τ ) E ≤ u(·, t) − u(·, τ ) E , t, τ ∈ R + . Finally, if the conditions (A1)-(A4) hold, then, by Proposition 4.3, one gets that the solution u exists and satisfies the conditions above if only C := θ. Moreover, (A3) implies that, for any v ∈ E + θ , that fulfills the proof. The maximum principle is a 'standard counterpart' of the comparison principle, see e.g. [17]. We will use in the sequel that, under some additional assumptions, the solutions to (1.1) are strictly positive; this is a quite common feature of linear parabolic equations, however, in general, it may fail for nonlinear ones. Consider the corresponding statement. In the sequel, it will be useful to consider the solution to (1.1) as a nonlinear transformation of the initial condition. where u(x, t) is the solution to (1.1) with the initial condition u(x, 0) = f (x). Let us collect several properties of Q t needed below. Proof. Let u 0 be decreasing along a ξ ∈ S d−1 . Take any s 1 ≤ s 2 and consider two initial conditions to (1.1): u i 0 (x) = u 0 (x + s i ξ) = (T −siξ u 0 )(x), i = 1, 2 (cf. (2.2)). Since u 0 is decreasing, u 1 0 (x) ≥ u 2 0 (x), x ∈ R d . Then, by Proposition 5.4, that proves the statement. The cases of a decreasing u 0 can be considered in the same way. The constant function along a vector is decreasing and decreasing simultaneously. 
To prove the hair-trigger effect (Theorems 2.5, 2.7), we will follow the abstract scheme proposed in [57] for a dynamical system in discrete time. Note that all statements there were considered in the space Consider the set N θ of all non-increasing functions ϕ ∈ C(R), such that ϕ(s) = 0, s ≥ 0, and For arbitrary s ∈ R, c ∈ R, ξ ∈ S d−1 , define the following mapping V s,c,ξ : Fix an arbitrary ϕ ∈ N θ . For t > 0, c ∈ R, ξ ∈ S d−1 , consider the mapping R t,c,ξ : L ∞ (R) → L ∞ (R), given by where Q t : E → E is a mapping which satisfies the conditions (Q1)-(Q5) in Proposition 5.4 (in particular, one can consider Q t given by (5.7) provided that (A1)-(A8) hold). Consider now the following sequence of functions Next, for any t > 0, ξ ∈ S d−1 , we define where, as usual, sup ∅ := −∞. By [57, Propositions 5.1, 5.2], one has cf. also [57,Lemma 5.5]; moreover, c * t (ξ) is a lower semi-continuous function of ξ. It is crucial that, by [57,Lemma 5.4], neither f t,c,ξ (∞) nor c * t (ξ) depends on the choice of ϕ ∈ N θ . Note that the monotonicity of f t,c,ξ (s) in s and (5.12) imply that, for c < c * t (ξ), f t,c,ξ (s) = θ, s ∈ R. Define We will need the following Weinberger's result: and v 0 ∈ E + θ . Let, for some fixed t > 0, Q = Q t : E → E be a mapping which satisfies the conditions (Q1)-(Q5) in Proposition 5.4, and Υ t be defined by (5.13). Suppose that int(Υ t ) = ∅. (5.14) Then, for any compact set C t ⊂ int(Υ t ) and for any σ ∈ (0, θ), one can choose a radius r σ = r σ (Q t , C t ) > 0, such that, for any fixed Remark 5.9. Note that, in [57, Theorem 6.2], the existence of r σ is proved only; there are not any estimates on it. As a result, for a given v 0 ∈ E + θ , the condition (5.15) cannot be checked directly. Remark 5.10. There is no loss of generality if we assume that (5.15) holds for , such that, for all n ≥ N , one gets x 0 + nC t ⊂ n C t . Therefore, we have The following statement presents a counterpart of Lemma 5.8 for continuous time provided that the mapping Q t is given by the solution to (1.1) as in (5.7). Then n m ≥ N j and, for q m := S j−1 + n m j , we easily get that max{1, |n|} |t m − q m | < δ. Therefore, that contradicts (5.24). Therefore (5.19) holds and the proof is fulfilled. We are going now to get rid of the assumptions (5.17) and (5.18) in Proposition 5.11. We start with the following lemma. and let v ∈ L ∞ (R → R + ) be a non-increasing function. Then the following limit holds Proof. For r > 0 and := r 2 , we have, by Fubini's theorem, Clearly, Next, because of the monotonicity of v, we have, Since −r − y < − r 2 for |y| ≤ = r 2 , we have that and, similarly, 1 1 |y|≤ r r−y v(s)ds → v(∞)y. Therefore, by the dominated convergence theorem, On the other hand, as r → ∞. Combining (5.27) and (5.28), one gets the statement. The following statement yields sufficient conditions for (5.17). Let f t,c,ξ be defined by (5.11). By the definition of Υ t and (5.12), we have that if f t,c,ξ (∞) = θ for all ξ ∈ S d−1 , then (5.29) holds. Suppose, in contrast, that, for some ξ ∈ S d−1 , f t,c,ξ (∞) = 0. Fix such a ξ, consider the corresponding c according to (5.30), and denote f := f t,c,ξ . Note that, by [57,Lemma 5.2] and the discussion thereafter, f (−∞) = θ. We set u 0 (x) := f (x · ξ), x ∈ R d , and consider the corresponding solution u to (1.1). Then, by (5.8), we evidently have Next, as it was mentioned above, the functions f n and f = f t,c,ξ in (5.11) are monotone, hence the limit in (5.11) is locally uniform. 
Therefore, passing n to ∞ in (5.10), we will get from (5.9) and Proposition 5. We find first a useful sub-solution to the linearization of (1.1) around the zero solution, namely Proposition 5.14. Let (A1), (A5), (A9) hold and m be given by (2.4). Then there exists α 0 > 0, such that, for all α ∈ (0, α 0 ), there exists T = T (α) > 0, such that, for all q > 0, the function is a sub-solution to (5.41) on t > T ; i.e., cf. (4.1), The proof is very similar to that in [30,Proposition 5.19]. For reader convenience, we provide the proof in the Appendix. Now, we will show that (5.42) is a sub-solution to (1.1) provided that q is small enough. Proposition 5. 16. Let (A1)-(A10) hold. Then, there exists t 1 > 0, such that, for any t > t 1 and for any τ > 0, there exists q 1 = q 1 (t, τ ) > 0, such that the following holds. If u 0 ∈ E + θ is such that there exist η > 0, r > 0, x 0 ∈ R d with u 0 (x) ≥ η, x ∈ B r (x 0 ) and u is the corresponding solution to (1.1), then The proof is, as a matter of fact, the same as that in [30,Proposition 5.20]. Again, for reader convenience, we provide the proof in the Appendix. Now we are finally ready to proof Theorems 2.5, 2.7. Proof of Theorem 2.5. As it was mentioned above, one can get the statement, combining Propositions 5.11 and 5.13, provided that (5.18) holds. To get rid of the latter assumption, one can literally follow the proof of [30,Theorem 5.10] using the results of Propositions 5.15 and 5.16. Proof of Theorem 2.7. Without loss of generality we can assume that θ−θ n ≤ θ 2 , n ∈ N. Let u n (x, 0) = v 0 (x) and u n solves the following equation θn u n := ∂u n ∂t − κ n a n * u n + u n G n u n + mu n = 0. Therefore by (A11) we obtain, Hence by Theorem 4.2 applied to F (n) θn , we obtain Applying Theorem 2.5 to the equation (5.44), we have that fulfills the proof.
Creep Rupture Behavior in Dissimilar Weldment between FB2 and 30Cr1Mo1V Heat-Resistant Steel

School of Mechanical Engineering, Tsinghua University, Beijing 100084, China
State Key Laboratory of Long-Life High Temperature Materials, Dongfang Turbine Co., Ltd, Deyang 618000, China
Manufacturing Technology Department, Dongfang Turbine Co., Ltd, Deyang 618000, China
School of Mechanical Engineering, Chengdu University, Chengdu 610106, China
Institute of Light Alloy Materials, Leshan Normal University, Leshan 614000, China

Introduction

With the purpose of reducing carbon dioxide emissions, which cause increasingly serious environmental problems, and of meeting the increasing demand for energy, research on renewable green energy is needed on the one hand, while traditional ultrasupercritical (USC) generation technology with improved steam parameters, such as higher temperature and pressure, has developed rapidly in recent decades on the other hand [1-3]. Dissimilar metal welding techniques have gradually been adopted to manufacture many important mechanical parts in industry, such as thermal power rotors in USC units or large petrochemical pressure vessels like reactors and separators, since dissimilar welds offer advantages such as unique property combinations, weight reduction, lower costs, and improved energy efficiency [4]. However, it is well known that producing sound dissimilar welds is challenging because of the different chemical and physical properties (melting point, coefficient of linear expansion, thermal conductivity, etc.) of the base metals [5,6]. Generally, the properties of a dissimilar weld depend on the properties of the base materials, the selection of the welding method and parameters, and the postweld heat treatment process, and the welds exhibit irregular mechanical properties owing to uneven changes in the microstructure [4]. A considerable amount of research [6-13] has been carried out to assess the creep deformation and fracture behavior of dissimilar weld joints of creep-resistant steels. For instance, Dagmar [12] quantitatively studied the precipitates in weld joints of COST F and FB2 creep-resistant steels using conventional and accelerated creep tests, and the fracture was located in the heat-affected zone (HAZ) of the F steel. The creep resistance of similar and dissimilar weld joints of P91 steel (a similar weld joint of 9Cr1Mo steel and a dissimilar weld joint of 9Cr1Mo and 2.25Cr1Mo steels) was studied in Ref. [8] with two experimental weld joints, finding that at high temperature the similar weldment ruptured in the HAZ of the parent material, while the dissimilar weldment ruptured in the HAZ of the weld metal. An evaluation of the creep behavior of a 2.25Cr-1Mo/9Cr-1Mo dissimilar weld joint together with its base and weld metals was performed in Ref. [10]. In the present work, a dissimilar weld joint of FB2 (13Cr9Mo1Co1NiVNbNB) and 30Cr1Mo1V heat-resistant steel was produced. Compared with other metals that may serve as functional materials [14,15], heat-resistant steels have been commonly used as structural materials in ultrasupercritical (USC) power plants owing to their creep and oxidation resistance and their excellent high-temperature strength.
Ferritic 9-12 wt.% Cr steels [16] are recognized as the key materials for producing casings and forgings for the turbines of USC units, and FB2 steel was developed under the European Cooperation in Science and Technology (COST) program [17] for large-scale forgings operating at creep temperatures up to 620°C. 30Cr1Mo1V [18-23] is a type of turbine-rotor steel designed in the 1990s to manufacture the high-pressure or intermediate-pressure rotors of USC units, serving at pressures between 23.5 and 25 MPa and temperatures between 538 and 540°C. Although studies have been carried out on the long-term creep behavior of these heat-resistant steels and their weldments, the creep behavior of the dissimilar FB2/30Cr1Mo1V weldment still requires detailed investigation. In this work, a dissimilar welded pipe joining FB2 and 30Cr1Mo1V forgings for a steam-turbine rotor was manufactured at Dongfang Turbine Company in China; it satisfied all requirements set by the relevant technical standards. Creep testing at 783 K (510°C) under different stresses was carried out. Microhardness values were obtained along the crept welded joint. Optical microscopy (OM) and scanning electron microscopy (SEM) were applied to explore the microstructure, fracture morphology, and precipitates in the crept specimens. The main goal of this work is to investigate the effect of the applied stress at 783 K (510°C) on the creep rupture behavior of the dissimilar weld joint of FB2 and 30Cr1Mo1V.

Materials and Methods

The chemical compositions of the two base metals are displayed in Table 1. The heat treatment of the forged FB2 consisted of quenching once and tempering twice, forming a fully lath-martensitic microstructure. The 30Cr1Mo1V forging first experienced a preheat treatment consisting of normalizing followed by tempering and was then quenched and tempered, forming a bainitic structure together with ferrite. The multilayer welding was carried out employing tungsten inert gas arc welding (TIG-W) for the backing weld and submerged arc welding (SA-W) for the subsequent multipass welding. Welding wires TG-S2CMH and US-521H were adopted for the TIG and SA welding, respectively. After the welded circular pipe was produced, a postweld heat treatment (PWHT) was conducted to stabilize the microstructure and relieve internal stress. Round tensile creep specimens were machined parallel to the axial direction of the welded pipe such that they contained both parent materials with the weld fusion zone at the center; the design and dimensions of the creep specimens are shown in Figure 1. The uniaxial tensile creep tests were carried out in air using a leveraged creep machine (CRIMS RD2-3) with a high-temperature furnace maintaining 783 K (510°C), under seven applied stress levels: 420, 400, 350, 320, 300, 270, and 260 MPa. In order to monitor and regulate the test temperature, two NiCr-NiSi thermocouples were attached to the specimen. Gage-length displacement and cumulative creep strain were measured with double linear variable displacement transducer (LVDT) extensometers with a resolution of 0.001 mm. The gage length and cross-sectional area of each specimen were measured before and after creep testing to calculate the percent elongation and the percent reduction in area. Microhardness values were obtained along the crept welded joint.
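The percent elongation and percent reduction in area quoted in the results section follow directly from these pre- and post-test gage measurements. A minimal sketch of this bookkeeping is given below (in Python, with purely hypothetical readings; the actual specimen dimensions are those of Figure 1):

import math

def percent_elongation(l0_mm, lf_mm):
    """Percent elongation from the initial and final gage lengths."""
    return (lf_mm - l0_mm) / l0_mm * 100.0

def percent_reduction_in_area(d0_mm, df_mm):
    """Percent reduction in area from the initial and final gage diameters."""
    a0 = math.pi * d0_mm ** 2 / 4.0
    af = math.pi * df_mm ** 2 / 4.0
    return (a0 - af) / a0 * 100.0

# Hypothetical readings, for illustration only:
print(percent_elongation(50.0, 53.6))        # about 7.2 %
print(percent_reduction_in_area(10.0, 5.2))  # about 73 %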
Optical microscopy (OM) and scanning electron microscopy (SEM) equipped with an energy-dispersive spectrometer (EDS) were applied to explore the microstructure, fracture morphology, and precipitates in the crept specimens.

Results and Discussion

3.1. Fractured Specimens. Photos of the welded specimens between FB2 and 30Cr1Mo1V after creep testing at 783 K under different stresses from 420 to 260 MPa are displayed in Figure 2(a). All of the investigated samples fractured in the weld fusion zone. The fracture location of dissimilar weldments depends not only on the parent materials but also on the creep test conditions. For example, the fracture of the weldment of FB2 and COST F creep-resistant steels occurred in the heat-affected zone (HAZ) of the F steel in Ref. [12]. Moreover, according to Ref. [8], which studied the P91/P22 weld joint, fracture initiated in the P22 parent material at relatively low temperatures and high stresses, while fracture occurred in the weld metal at higher temperatures and lower stresses. Based on the research of Ref. [24] on dissimilar welded joints between a ferritic steel and an austenitic stainless steel (T92/HR3C), the rupture position varies with stress, moving from the T92 base material to the HAZ of T92 and then to the weld seam with decreasing stress. A certain amount of localized plastic deformation, i.e., necking, after various degrees of creep damage can be observed in the samples tested at stresses of no less than 270 MPa, with the measured percent elongation ranging from 5.52% to 7.19% and the percent reduction in area from 55.73% to 73.19%. The detailed percent elongation and reduction-in-area values under the various creep conditions are displayed in Figure 2(b). The creep ductility exhibits little change between 420 and 300 MPa and drops dramatically at 270 MPa. The rupture time increases with decreasing applied stress, and the welded specimen did not break before the target time of 10,000 hours at the lowest stress of 260 MPa. Room-temperature and high-temperature tensile tests were also performed on the welded joints and, just as in the creep tests, all specimens ruptured in the weld metal; the strength decreases significantly while the ductility values increase slightly with testing temperature. At room temperature, the yield strength, tensile strength, elongation, and reduction in area are 692 MPa, 617 MPa, 13%, and 72%, and they change to 463 MPa, 430 MPa, 14%, and 78% at 823 K (550°C), respectively. It is worth noting that the specimens tested at 400 and 320 MPa deviate from the trend of rupture time increasing with decreasing stress, which is probably a result of welding defects. Figure 3 displays the SEM fractograph of the specimen tested at 320 MPa; secondary cracks and inclusions can be observed. Besides, small local ductile-fracture regions showing dimpled fracture surfaces are spread across the fracture surface, implying that the fracture in this specimen is not a purely ductile fracture.

3.2. Creep Rupture Exponent and Life Assessment. The variation of the creep rupture life (t_r) of the welded joint with the applied stress (σ) is plotted logarithmically in Figure 4 according to the equation t_r = A·σ^(-n), where A is a coefficient and n is the creep rupture exponent. The fitted creep rupture exponent is 14.53, with a coefficient of determination, R^2, of 0.93477. Zhang et al. [25] have successfully utilized this law to predict the creep life of weldments between an austenitic heat-resistant steel and a nickel-based weld metal at a specified temperature. The extrapolated rupture strength for 10,000 h at 783 K (510°C) is 261.67 MPa, which agrees with our experimental data (see Figure 2). In order to evaluate the combined influence of the exposure temperature and the applied stress on the creep rupture of the weldment, the Larson-Miller parameter (LMP), of the form P = T(C + lg t_r), where C is a material constant, is utilized to analyze the creep data; the LMP is one of the three versions of the time-temperature parametric (TTP) methods with a relatively high rate of success [26]. The calculated results are displayed in Figure 4. The extrapolated rupture strength at 783 K for 10,000 h is 259.02 MPa, with an LMP value of approximately 2.16 × 10^4. Therefore, the 10,000 h extrapolated strength values predicted by the power law and the LMP method show good agreement; a brief numerical sketch of both extrapolations is given below.
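The brief numerical sketch announced above is given here (in Python). The stress and rupture-time pairs and the Larson-Miller constant C are placeholders rather than the measured values of this study; the sketch only illustrates how the two extrapolations are carried out.

import numpy as np

# Hypothetical (stress [MPa], rupture time [h]) pairs, for illustration only.
stress = np.array([420.0, 400.0, 350.0, 320.0, 300.0, 270.0])
t_rupture = np.array([40.0, 90.0, 350.0, 700.0, 1800.0, 7400.0])

# Power law t_r = A * sigma**(-n): a straight-line fit in log-log coordinates.
slope, intercept = np.polyfit(np.log10(stress), np.log10(t_rupture), 1)
n = -slope                      # creep rupture exponent
A = 10.0 ** intercept

def strength_for_life(t_target_h):
    """Stress extrapolated from the power law for a target rupture life."""
    return (A / t_target_h) ** (1.0 / n)

print(f"n = {n:.2f}, 10,000 h strength = {strength_for_life(1.0e4):.0f} MPa")

# Larson-Miller parameter P = T*(C + lg t_r), with T in kelvin and C a material
# constant (C = 20 is a common default; the value used above is not restated here).
T_K, C = 783.0, 20.0
lmp = T_K * (C + np.log10(t_rupture))
# A fit of stress versus P can then be read off at P(783 K, 10,000 h) = 783*(C + 4)
# to obtain the corresponding extrapolated strength.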
The microstructures of the weldment are shown in Figures 5 and 6, before and after creep testing. The OM image of the parent material FB2 shows a microstructure typical of high-Cr (9-12 wt.% Cr) ferritic steel; that is, the prior austenite grains are divided into packets and further into blocks, and many elongated subgrains containing a high density of free dislocations can be observed within the blocks [27]. The 30Cr1Mo1V, in contrast, has much smaller prior austenite grains than FB2, and a bainite-plus-ferrite structure can be observed, in agreement with previous studies [21-23]. The heat-affected zone exhibits different microstructures, including a coarse-grain zone and a fine-grain zone, whose evolution is mainly influenced by the weld thermal cycle (the heating and cooling rates), the peak temperature experienced, the dwelling time, the number of welding passes, and the postweld heat treatment. As displayed in Figure 2, the fracture site is located in the weld fusion zone; accordingly, the weld seam (Figure 6) shows a complex microstructure of bainite, martensite, and coarse primary ferrite. Various solidification morphologies, including fine equiaxed grains as well as coarse columnar crystals and dendrites, can be observed in Figure 1(b) even with the unaided eye. Such microstructural variation can be attributed to the effect of each pass on the previous welding pass. The temperature gradient between the weld and the base metal leads to rapid solidification, which results in the formation of columnar austenite grains. For multipass welding, the subsequent passes may cause autotempering of the previously deposited metal, leading to the formation of fine prior-austenite grain structures in the nearby previously deposited metal. It can be seen in the weld seam of the dissimilar weldment ruptured at 783 K (510°C) under 300 MPa (Figure 6) that the equiaxed-grain zone has grown while the columnar crystals and dendrites have decreased, perhaps because the previously formed martensitic structure in the weld seam is not stable and tends to degenerate into ferritic equiaxed crystals with a low dislocation density as a result of recovery and recrystallization during high-temperature creep.

Creep Resistance Analysis in Weld Metal. Weldments have a highly heterogeneous microstructure; thus, the microhardness can vary considerably over relatively short distances. Figure 7 displays the hardness profiles across the original weldment before creep testing and across the specimens after testing at 783 K at applied stresses of 400, 300, and 270 MPa.
Note that the width of the weld fusion zone of the crept specimens shown in Figure 7 is smaller than its real dimension, because a section of each fractured specimen, approximately 10 mm long, was cut off to observe the fracture surface. The actual width of the weld metal in the crept specimens should be greater than that of the original weld joint as a result of the plastic deformation during creep testing. It can be seen clearly that the FB2 base metal exhibits greater microhardness values than 30Cr1Mo1V, both before and after the creep tests. The hardness of the weld metal exhibits a declining trend with decreasing stress and increasing creep time, while the hardness of the other microzones of the weldment does not show such a sharp decline. Therefore, the weld fusion zone changes from the maximum-hardness region to the minimum-hardness region during creep testing, which is more obvious for the regions near the fusion boundary, indicating that a softening phenomenon occurs in the weld metal. This result differs from previous findings [25, 28-30], where the hardness of the fine-grained/intercritical HAZ (FGHAZ/IC-HAZ) is the lowest and the soft FGHAZ/IC-HAZ shows a higher susceptibility to type IV cracking. For high-Cr ferritic steels there are mainly three strengthening mechanisms: solution strengthening, precipitation strengthening, and the dislocation substructure [27]. It is known that the martensitic and bainitic transformations in the weld seam introduce an extremely high dislocation density, and these dislocations decrease rapidly as the microstructure changes during creep testing, resulting in a decrease in creep strength and microhardness. The evolution of the dislocation density in X20 and P91 (two martensitic ferritic steels) during heat treatment and creep was analyzed using transmission electron microscopy (TEM) and X-ray diffraction (XRD) in Ref. [31]; according to their results on P91, the dislocation density is 4.20 × 10^14 m^-2 after quenching and drops to 0.60 × 10^14 m^-2 after one hour of tempering and to 0.06 × 10^14 m^-2 in the crept specimen. Figures 8(a) and 8(b) show TEM micrographs of the elongated martensitic lath structure in weld metal that did not experience stress; there is a high density of dislocations within the laths and subgrains, which is mainly attributed to the formation of martensite during welding and the subsequent heat treatment. The microstructural study of the specimen crept at 270 MPa, shown in Figures 8(c) and 8(d), exhibits a decrease in the dislocation and precipitate density and an increase in the size of the precipitates. Besides the deterioration of the abovementioned strengthening mechanisms, Laves-phase precipitation, (Fe,Cr)2(W,Mo), weakens the creep strength by removing Mo from solid solution. Accordingly, Laves precipitates along with Mn- and Cu-rich precipitates can be observed along grain boundaries in Figure 9, which will be further discussed in the following section. Moreover, previous studies demonstrated that the hardness is affected not only by the dislocation structure but also by other microstructural features such as microvoids and precipitates [29]. Figure 10 displays the evolution of the microvoids with respect to creep stress; a large number of microvoids form in the crept specimens.

3.5. Fractography and Precipitate Analysis. Figure 11 displays creep microvoids and precipitates near the fracture surface. As shown above, all of the investigated samples exhibit necking after various degrees of creep damage, which is characteristic of ductile fracture.
A tearing region can be clearly observed on each fracture surface, which is considered to correspond to the final plastic deformation (tearing) stage of the fracture process. The SEM morphologies of the smooth region of the ruptured specimens exhibit transgranular fracture features characterized by dimples resulting from microvoid coalescence [32]. Equiaxed dimpled fracture surfaces can be observed in the micrographs at high magnification, and with decreasing applied stress, i.e., with increasing creep time, large dimples emerge and thus the average dimple size increases. Creep cavities are found to be surrounded by the ductile dimples, and a large proportion of the observed dimples are centered around near-spherical particles. In general, the nucleation of microvoids preferentially takes place at precipitates or inclusions, which act as stress concentrators; the voids then grow and link together to form a macroscopic flaw, resulting in fracture. Herein, the fracture mechanism is microvoid-coalescence fracture, and the precipitates create stress concentrations that can initiate transgranular fracture. EDS analysis of the precipitates in Figure 12 shows that the precipitates contain metallic elements such as Fe, Cr, and Mo. According to previous studies [12,33,34], these precipitates can be recognized as Laves-phase precipitates, an intermetallic phase of W and Mo identified as (Fe,Cr)2(W,Mo). W is not evident in the above precipitates since no W is added to 30Cr1Mo1V and only 0.01 wt.% W is added to FB2. Regarding the formation mechanism of the Laves phase, some researchers believe that it swallows the M23C6 carbides by nucleating and growing on them and forms clusters around prior grain boundaries [35]. After long-term creep, with the nucleation and growth of the Laves phase, the Mo and W contents in solid solution are decreased; meanwhile, the pinning effect of the M23C6 is weakened, leading to a decrease in solid-solution strengthening and creep strength, which will limit the life of the component under high-temperature service conditions. In addition, it is known that Laves precipitation can enhance the creep resistance during short-term creep by inhibiting the recovery of the subgrain structure and by pinning the boundaries, but this positive effect declines with creep time owing to the high coarsening rate [36]. Large Laves precipitates can act as cavity initiators, and precipitates larger than 130 nm changed the fracture mode from ductile to brittle according to the research of Panait et al. [37]. Besides the Laves phase, copper-rich precipitates containing Fe, Cu, Cr, and C can be observed in the vicinity of the fracture surface. Xiao et al. [38] discussed solute-dislocation interactions and creep-enhanced Cu-rich precipitate (CRP) evolution in a novel ferritic-martensitic steel, G115, during creep. Such large precipitates may nucleate microvoids, which results in final fracture.

Conclusions

Dissimilar weldments between FB2 and 30Cr1Mo1V were produced employing multipass submerged arc welding (SA-W), with the backing weld performed by tungsten inert gas arc welding (TIG-W). High-temperature creep tests at 783 K (510°C) were carried out at applied stresses ranging from 420 to 260 MPa. The creep rupture behavior of the dissimilar weldments is studied and summarized below.
All of the creep samples fractured, with a certain amount of localized plastic deformation, in the weld fusion zone, which exhibits a heterogeneous microstructure as a result of the multipass welding. The specimen tested at the lowest stress of 260 MPa did not break within the target time of 10,000 h; the 10,000 h extrapolated rupture strength is 262 MPa from the power law and 259 MPa from the Larson-Miller parameter, both of which agree well with the experimental result. The FB2 base metal exhibits greater microhardness values than 30Cr1Mo1V, both before and after the creep tests. The hardness of the weld metal shows a declining trend with decreasing stress and increasing creep time, so the weld fusion zone changes from the maximum-hardness region to the minimum-hardness region as a result of creep, indicating that a softening phenomenon occurs in the weld metal. The rupture occurs in the weld metal, probably as a result of the decreased dislocation density and the appearance of microvoids and precipitates during creep. The SEM morphologies of the smooth region of the ruptured specimens exhibit transgranular fracture features, and equiaxed dimpled fracture surfaces can be observed. With increasing creep time, large dimples emerge and thus the average dimple size grows. Creep cavities are found to be surrounded by the ductile dimples, and a large proportion of the observed dimples are centered around near-spherical particles. These particles are identified as Laves-phase and Cu-rich precipitates on the basis of the EDS analysis (Figure 12 shows the EDS spectrum of precipitates in the vicinity of the fracture surface in the specimen after 7386.5 hours of creep testing). Microvoids preferentially nucleate at precipitates; the voids then grow and link together to form a macroscopic flaw, which leads to the eventual fracture.

Data Availability

The microstructure, hardness, and creep data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.
THz-TDS time-trace analysis for the extraction of material and metamaterial parameters We report on a method and an associated open source software, Fit@TDS, working on an average personal computer. The method is based on the fitting of a time-trace data of a terahertz time-domain-spectroscopy system enabling the retrieval of the refractive index of a dielectric sample and the resonance parameters of a metasurface (quality factor, absorption losses, etc.). The software includes commonly used methods where the refractive index is extracted from frequency domain data. However, these methods are limited, for instance in case of a high noise level or when an absorption peak saturates the absorption spectrum bringing the signal to the noise level. Our software allows to use a new method where the refractive indices are directly fitted from the time-trace. The idea is to model a material or a metamaterial through parametric physical models (Drude-Lorentz model and time-domain coupled mode theory) and to implement the subsequent refractive index in the propagation model to simulate the time-trace. Then, an optimization algorithm is used to retrieve the parameters of the model corresponding to the studied material/metamaterial. In this paper, we explain the method and test it on fictitious samples to probe the feasibility and reliability of the proposed model. Finally, we used Fit@TDS on real samples of high resistivity silicon, lactose and gold metasurface on quartz to show the capacity of our method I. INTRODUCTION The method of terahertz time-domain spectroscopy (THz-TDS), enabled by the progress in short-pulse lasers, begun rapid development about 30 years ago [1,2,3,4]. This wellestablished technique is now mature and has been commercialized by several companies. Today, THz-TDS is the main tool for broadband terahertz (THz) spectroscopy, offering more than one-decade of bandwidth with a standard resolution up to ~ 1 GHz. THz-TDS has shown the capability to study different materials such as semiconductors [5], ferroelectrics, superconductors, liquids [6], gases [7,8], "This work was partially supported by: i) the international chair of excellence "ThOTroV" from region "Hauts-de-France" ii) the welcome talent grant NeFiStoV from European metropole of Lille and iii) the French government through the National Research Agency (ANR) under program PIA EQUIPEX ExCELSiOR ANR 11-EQPX-0015." All Authors are with the Institut d'Electronique de Microélectronique et de Nanotechnologie (IEMN), CNRS, Univ. Lille, 59652 Villeneuve d'Ascq, France (corresponding author: romain.peretti@univ-lille.fr). As its name suggests, this spectroscopic technique involves measurements made in the time domain, and does not utilize dispersive elements as in commonly used frequency-domain methods. Furthermore, in contrast to Fourier-transform infrared spectrometers where the measured function is the autocorrelation of the time-domain data through interferometry, THz-TDS is based on a direct measurement of the electric field in the THz frequency range. The working principle of a common THz-TDS setup is depicted in FIG. 1. A THz pulse is emitted by a THz antenna or a nonlinear crystal by means of the optical rectification effect of a near-infrared pulse produced by a femtosecond laser. Next, a lens or a parabolic mirror is used to collimate the pulse and to direct it toward the sample under study. The transmitted (or reflected) pulse is then collected by an optical system and aligned onto a detector. 
The detector measures the electric field versus time by means of photoconductive or electro-optical sampling, with a typical time sampling between 10 and 50 fs. The ability to measure directly the electric field of the THz pulse rather than the averaged energy gives access to both the phase and the amplitude of the waveform, and thus provides information on the absorption coefficient and the refractive index of the sample, or on the dispersion [15] in the case of a photonic element like a waveguide. This makes the THz-TDS method a powerful tool for the characterization of materials and photonic devices. For material analysis, the usual way to retrieve material parameters is to perform a Fourier transform of the recorded pulse time-traces with and without a sample. The ratio between these two spectra is called the complex transmission coefficient and can be written as [16,17]: Here, and are the Fourier transforms of the time-domain signals and , respectively, ñ is the complex refractive index, whose real part corresponds to a delay and whose imaginary part corresponds to absorption in the material, d is the thickness of the sample, which must be measured, and ω is the angular frequency. The term is the product of the Fresnel coefficients at normal incidence for the two air/material interfaces: Finally, is a term taking into account the Fabry-Pérot multiple reflections in the sample [18]: It has to be noted that using eq. (3) may be a problem if not all the echoes above the noise floor of the TDS experiment are recorded (i.e., if the time-trace is shorter than the full signal). Indeed, since the FFT algorithm performs a periodization of the signal in the time domain, in such a case the model will fold the echoes back to the beginning of the time-trace. To avoid this effect, one can replace the expression in eq. (3) simply by the sum of the first terms of the Fabry-Pérot series, as shown in [19]. Equation (1) sets the so-called "forward problem": knowing and , one can obtain . Since the experiment gives and , the actual interest is the "inverse problem", that is, with knowledge of and , one determines . To the best of our knowledge, this problem can be solved analytically only by ignoring the Fabry-Pérot term (by, for example, using a temporal filter) and only for a sample without absorption. Since these assumptions imply that there is no phase term in the transmission coefficients in (2), the method consists of extracting the unwrapped phase from the THz-TDS data in the frequency domain and then dividing by the frequency to obtain the refractive index. Another implication of these assumptions is that there are no losses during the pulse propagation; hence one can retrieve the transmission as a function of frequency and solve the second-order equation from (2) to retrieve the refractive index. As a consequence of the aforementioned assumptions, this method is limited to optically thick (nd > 1.5 mm), non-absorbing samples. Nevertheless, one can iterate this process by determining the real part of the refractive index from the unwrapped phase and then compensating for the difference in the losses by adding an imaginary term to the refractive index. One then has to compensate for the phase term in the transmission due to this imaginary part, thus returning to the beginning of the loop. This iterative method is a good starting point; however, it does not guarantee the convergence or any reliability of the obtained results.
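For concreteness, the forward model of Eqs. (1)-(3), namely the Fresnel coefficients, the propagation term, and the Fabry-Pérot factor (either the full series or the truncated sum advocated in Ref. [19]), can be written in a few lines. The sketch below (Python/NumPy) assumes normal incidence and a single homogeneous slab in air; the sign conventions are chosen to be self-consistent and may differ from those of the software.

import numpy as np

C0 = 299792458.0  # speed of light in vacuum (m/s)

def slab_transmission(omega, n_complex, d, n_echoes=None):
    """Complex transmission of a single slab in air relative to the empty path:
    product of the two Fresnel coefficients, propagation over thickness d, and
    the Fabry-Perot term (full series, or a truncated sum as in Ref. [19]).
    Convention here: absorption corresponds to a negative imaginary part of n."""
    n = n_complex
    fresnel = (2.0 / (1.0 + n)) * (2.0 * n / (1.0 + n))   # t_air->slab * t_slab->air
    propagation = np.exp(-1j * (n - 1.0) * omega * d / C0)
    round_trip = ((n - 1.0) / (n + 1.0)) ** 2 * np.exp(-2j * n * omega * d / C0)
    if n_echoes is None:
        fabry_perot = 1.0 / (1.0 - round_trip)            # full geometric series
    else:
        fabry_perot = sum(round_trip ** k for k in range(n_echoes + 1))
    return fresnel * propagation * fabry_perot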
Since this technique is intrinsically noncausal (the Kramers-Kroenig relation is not fulfilled), there is room for improvement by solving this inverse problem with a numerical approach. To do so, one should define an error function which must be minimized. This error function could be defined as [4]: where δρ is the modulus error and δφ is the phase error between the modelled transmission coefficient and the measured one . Ψ is a weighting coefficient enabling the addition of the phase and modulus errors, its value is usually set to 1 amplitude unit/rad. The error function is defined for each frequency. Since the refractive index is related to the phase, special care must be taken when calculating δφ. The measured phase is calculated as the unwrapped phase of meas T % and the modelled phase is calculated as the following: Once the error function is defined, a minimization algorithm must be implemented. For example, one can use the simplex method (gradient free method) [20] or a quasi-Newton algorithm [21,22]. The parameter search has to be done for every single frequency and gives as a result the real and the imaginary parts of the refractive index, and , respectively. This method is fast and works regardless of whether the Fabry-Pérot effect is taken into account. However, the result does not respect causality (which takes the form of the Kramers-Kroenig relations in this problem). This creates a significant issue during the unwrapping step, which strongly depends on the dynamic range. It has been shown that it is possible to partially solve this problem by including a correction to the unwrapped measured phase using partial Kramers-Kroenig relations [23]. Moreover, in the error function (4), both the modulus and the phase errors have the same weight. This is arbitrary and any choice of weighting other than 1 amplitude unit/rad could improve or diminish the efficiency and accuracy of the algorithm. Both the iterative and the optimization techniques show good results, but are limited-one still needs a precise measurement of the thickness or the implementation of an additional optimization step [24]. In addition, a low dynamic range of the measured data, or a strong absorption in the sample, leads to difficulties in obtaining the refractive index following the Kramers-Kroenig relations. This is due to the fact that the phase is lost in the frequency range where the signal value is below the noise level [23], which implies an additional step while performing the phase unwrapping. This phase includes additional assumptions in the number and the shape of the absorption peaks. Furthermore, the arbitrary weighting between the phase and the amplitude, as well as an arbitrary limitation of the bandwidth to avoid the range of low dynamic range, limits the robustness and thus the expansion of the optimization method. Nevertheless, with the refractive indices retrieved by these techniques, it is possible to extract further information about the material itself. For this purpose, the refractive indices are fit using models such as the Drude-Lorentz model [18,25]. One can then gain insight into the physical properties of the material under study, such as electronic or vibrational resonances. Knowledge of these parameters is of prime importance for material identification. This is, for instance, one of the most promising THz applications for drug component quality control or anti-counterfeiting measures. 
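Coming back to the frequency-by-frequency extraction described at the beginning of this discussion, it can be sketched as follows, reusing the slab_transmission helper from the previous sketch and SciPy's Nelder-Mead simplex as a stand-in for the gradient-free minimizer. The weighting Ψ is left at 1 amplitude unit/rad and no unwrapping correction is applied, so this is an illustration rather than the software's implementation.

import numpy as np
from scipy.optimize import minimize

def extract_index_per_frequency(omega, T_meas, d, psi=1.0):
    """Minimize the modulus/phase error independently at each frequency.
    T_meas is the measured complex transmission; returns (n, kappa) arrays."""
    n_out = np.zeros(omega.size)
    k_out = np.zeros(omega.size)
    guess = np.array([2.0, 0.0])                 # warm start, updated along the sweep
    for i, w in enumerate(omega):
        def error(p):
            n_c = p[0] - 1j * p[1]               # absorption: negative imaginary part
            T_mod = slab_transmission(np.array([w]), n_c, d)[0]
            d_rho = np.abs(T_mod) - np.abs(T_meas[i])
            d_phi = np.angle(T_mod) - np.angle(T_meas[i])   # no unwrapping handled here
            return d_rho ** 2 + (psi * d_phi) ** 2
        result = minimize(error, guess, method="Nelder-Mead")
        n_out[i], k_out[i] = result.x
        guess = result.x
    return n_out, k_out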
Most approaches focus on the intensity and the resonance frequencies while some take into account the linewidth [25]. Such parameter retrievals were achieved, for example, with the help of an optimization routine [26,27], such as genetic algorithms [28], used in the frequency domain. Generally, the principle steps are the following: (i) performing the experiments with and without samples; (ii) computing the Fourier transform; (iii) measuring the thickness of the sample; (iv) extracting the real and/or imaginary part of the refractive index, and (v) fitting the refractive index to obtain the material parameters. This process has shown promising results, but has several drawbacks as previously described. In this work, we present a robust and generic optimization approach [29,30]. The Fit@TDS software is based on a direct comparison of the initial time-domain data of the measured THz pulse with a mode similarly to the work presented to measure thickness of paint layer by Van Mechelen et al. [19]. We introduce this software and show that it enables one to model both simple materials, such as silicon; as well as more complex ones, such as carbohydrates and even metasurfaces. The remainder of this article is organized as follows. The basics of the implemented optimization is explained in section II. The two different models used are briefly described in section III. Sections IV and V present an analysis of the method's performance on fictitious and real samples, respectively. II. OPTIMIZATION PROBLEM The optimization problem is a problem of finding the best solution from all the feasible solutions. In the studied case, it starts with two items: 1. A set of data containing the time-traces with and without a sample (2 traces); 2. A model depending on the set of parameters depicting how the sample transforms the reference pulse into the modeled one, . Concretely, an example of a model for a doped semiconductor sample measured in transmission would transform to simply by convoluting with , calculated by introducing the Drude-Lorentz model equation in the refractive index in (1) (see Sec. III for details). Then, the objective function to minimize is set as the L 2 norm (square root of the sum of the square of the differences) of the difference between the modelled pulse and the measured (sample) one: This function will vary upon the value of parameters of the chosen model and the goal of the optimization is to determine the set of parameters that minimizes the objective function. An important remark here is that the function we are minimizing is proportional to electromagnetic energy. The fact that the L 2 residual error is an intuitive physical quantity will help the user in interpreting the results, and to understand any discrepancies in either the experiment or the model. This will, for instance, facilitate the understanding of any divergence or convergence of the fit algorithm to some local minimum, or to (in)validate the choice of the model during the optimization. The other practical advantage of this formulation is given by Parseval's theorem, which states that the norm of a function is the same as the norm of its Fourier transform, meaning: This is extremely convenient, allowing the calculation of the objective function in both time and frequency domains without performing a Fourier transform at each iteration. 
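As an illustration of this point, the L2 objective introduced above can be evaluated entirely in the frequency domain once the FFTs of the two recorded time-traces are available. The sketch below assumes a single-oscillator Drude-Lorentz material (one common convention for the permittivity of Eq. (8), without the free-carrier term) and reuses the slab_transmission helper given earlier; E_ref_w and E_samp_w denote the Fourier transforms of the reference and sample time-traces.

import numpy as np

def drude_lorentz_index(omega, eps_inf, oscillators):
    """Complex refractive index from a sum of Lorentz oscillators, in one common
    convention (no free-carrier Drude term): oscillators = [(w0, gamma, d_eps), ...].
    The sign of the damping term matches the propagation convention used above."""
    eps = np.full(omega.shape, eps_inf, dtype=complex)
    for w0, gamma, d_eps in oscillators:
        eps += d_eps * w0 ** 2 / (w0 ** 2 - omega ** 2 + 1j * gamma * omega)
    return np.sqrt(eps)

def l2_objective(params, omega, E_ref_w, E_samp_w):
    """L2 norm of the time-domain misfit, evaluated in the frequency domain
    (Parseval): sum over frequencies of |E_samp - T(omega; params) * E_ref|^2."""
    eps_inf, w0, gamma, d_eps, thickness = params
    n_c = drude_lorentz_index(omega, eps_inf, [(w0, gamma, d_eps)])
    T_mod = slab_transmission(omega, n_c, thickness)
    return np.sum(np.abs(E_samp_w - T_mod * E_ref_w) ** 2)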
Specifically, since, to our knowledge, all the refractive index models are defined in the frequency domain, this formula allows one to perform the time-domain optimization while computing the objective function in the frequency domain. III. MODELS To perform the optimization, we coded the methods as given in repository from Ref. [31] in a mainstream language (Python) that allows the use of different optimization libraries. Since our goal is to offer a broad tool to the community, that is usable regardless of the type of sample under study or the subsequent problem to solve, we implemented a function from a research library [32] that includes many different optimization algorithms-giving users the opportunity to choose the ideal one for their samples. For the choice of the algorithm itself, we chose the versatile augmented Lagrangian particle swarm optimizer (ALPSO) [33]. This algorithm is an improvement on the basic "swarm particle optimization" approach used in constrained engineering design and the optimization problem. We implemented two different models: one for solid materials and one for metamaterials. In both cases, we assume that the sample is made only of one layer-meaning two interfaces added to a propagating medium. A. Multiple Drude-Lorentz Model Bulk solid materials play a role in the pulse propagation simply due to their refractive index in equation (1) and in the Fabry-Pérot term therein. We implemented the Drude-Lorentz model (the most commonly used model for solid samples), which defines the dielectric permittivity of a sample as a set of electronic resonators (matrix vibrations, oscillating charges, etc.) and leads to the following permittivity function: where is the dielectric permittivity at high frequency compared to the range of interest, is the plasma frequency, is the damping rate, k max is the number of considered oscillators, , and are the resonant frequency, the damping rate and the strength (expressed in permittivity units) of the k th oscillator, respectively. This formula will be useful to model the phonon line in a semiconductor or in a molecular crystal in the THz range. To summarize, in the case of a single uniform layer, the propagation will be modeled through the Fresnel coefficients and the multiple Drude-Lorentz oscillator model using parameters. B. Metasurfaces For a metamaterial, the model is different because one has to take into account (i) the refractive index of the material used to build the metamaterial, and (ii) the interaction of the THz pulse with the metamaterial structure itself. As often in any physical model, one can choose a macroscopic approach or a microscopic one. Since a metamaterial is a macroscopic concept the first approach is the most used: one model the metamaterial in terms of effective refractive index (permittivity and permeability …). Such an approach is especially useful for metamaterial based applications but has the drawback of being less intuitive when it comes to optimization of the design or the fabrication. A time-domain coupled-mode theory (TDCMT) model [34] of the resonator constituent of the metamaterial has been adopted so as to provide more insight in the physics of the device. This approach has been used, for instance, to model an integrated photonic resonator [34] or, in the free-space case (which is closer to the scope of this work), a photonic crystal in a solar cell [35]. Thus, it is fully appropriate for a metasurface built out of resonators, such as split ring resonators (SRR). 
We derived a similar equation as found in Ref. [34] with the addition of the reflection and transmission at the air/sample interfaces, and the result gives: where and are the reflection and transmission coefficients (shown in FIG. 2) following the Fresnel law, ω 0 is the resonant frequency of the mode, τ 0 is the characteristic time for absorption losses, and τ e , τ e1 , and τ e2 are the characteristic times for external losses in total, toward direction 1 (incoming) and toward direction 2 (outgoing), respectively. Note that an additional hypothesis arises here: we assume that the metal of the metasurface does not influence the transmission or reflection outside of the resonance spectral ranges-meaning that the filling factor (area ratio) of the metal must be very low. (9). Equation (9) will play a role in changing the transmission term: and in the Fabry-Pérot term can then be written as: To summarize, in the case of a metasurface, the substrate will be modelled using the same Drude-Lorentz parameters with five additional parameters to take into account the resonant nature of the transmission and the reflection at the metasurface interfaces (frequency and, internal and external losses). It has to be noticed that this model is well suited for infinitely thin metasurface as those based on metallic structures. In the case of relatively thick dielectric metasurfaces [36,37] one would have to modify the model to introduce the two interfaces before and after the metamaterial layer. C. Implementation The proposed models can involve more than thirty parameters for complex samples, thus an exhaustive error calculation to reach the global maximum for each set of parameters is too demanding. As stated above, the strategy we use is to implement an optimization algorithm to solve this nonlinear problem. A tremendous number of algorithms have been developed-giving birth to several fields of research. We implemented a library offering several optimization algorithms (the Python-based optimization package called PyOpt [32]), which is designed to formulate and solve nonlinear constrained optimization problems. The main advantage of this package is that it includes 20 different optimization algorithms, allowing users of Fit@TDS to change the algorithm for one that is more efficient for his or her specific problem. In addition, PyOpt allows parallelization, which is extremely useful for diminishing computation time. Since the problem is to find a global maximum rather than a local one, we implement the optimized particle swarm routine ALPSO amongst the proposed algorithms. The particle swarm algorithm is a versatile meta-heuristic method that makes no or few assumptions about the problem being optimized, and can search very large spaces of candidate solutions [38,39]. The drawback of this choice is that it is not optimized in terms of computation time for our specific problem(s). Of course, this could be improved in the future with better-suited algorithms. SIMULATED FACTITIOUS SAMPLES To validate and assess the performance of the proposed methods we first used a simulated sample. To do so, we recorded the reference spectrum with the TeraSmart THz-TDS spectrometer by Menlo Systems GmbH (1000 accumulations) [40] and then numerically simulated the response of the system with a sample using the equations described above. 
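The numerical simulation of the sample response mentioned here can be sketched in a few lines by reusing the helpers introduced above; SciPy's differential evolution is used as a convenient stand-in for the ALPSO swarm optimizer of pyOpt, and all parameter values and bounds are illustrative only.

import numpy as np
from scipy.optimize import differential_evolution

def simulate_sample_trace(e_ref, dt, params, noise_rms=5e-5, seed=0):
    """Convolve the measured reference pulse with the model transfer function
    of Eq. (1) (single Drude-Lorentz slab) and add Gaussian white noise."""
    omega = 2.0 * np.pi * np.fft.rfftfreq(e_ref.size, dt)
    eps_inf, w0, gamma, d_eps, thickness = params
    n_c = drude_lorentz_index(omega, eps_inf, [(w0, gamma, d_eps)])
    E_samp = np.fft.rfft(e_ref) * slab_transmission(omega, n_c, thickness)
    rng = np.random.default_rng(seed)
    return np.fft.irfft(E_samp, n=e_ref.size) + noise_rms * rng.standard_normal(e_ref.size)

def fit_trace(e_ref, e_samp, dt, bounds):
    """Bound-constrained global search minimizing the L2 misfit defined above."""
    omega = 2.0 * np.pi * np.fft.rfftfreq(e_ref.size, dt)
    E_ref_w, E_samp_w = np.fft.rfft(e_ref), np.fft.rfft(e_samp)
    cost = lambda p: l2_objective(p, omega, E_ref_w, E_samp_w)
    return differential_evolution(cost, bounds, seed=1, polish=True)

# Example bounds on (eps_inf, w0, gamma, d_eps, thickness), purely illustrative:
# bounds = [(9.0, 13.0), (2*np.pi*0.9e12, 2*np.pi*1.1e12),
#           (1e10, 1e12), (0.01, 1.0), (4.95e-3, 5.05e-3)]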
The data without a sample were windowed at the end of the time-trace on a segment length corresponding to the time delay introduced by the sample to remove folding effects due to the periodicity of the FFT. For the data with the simulated sample, we convolved the measured pulse by the transfer function of equation (1) with the Drude-Lorentz model to define the permittivity. Then, we added Gaussian white noise to the time-trace with a magnitude equivalent to the level of the high frequency noise one can observe in FIG. 3 (5×10 -5 of the amplitude unit used, and approximately -90 dB compared to maximum power spectral density). We used these two sets of time-trace data as the input to Fit@TDS. The spectral data clearly features a dip at the frequency of the oscillator. Then, we implemented our software Fit@TDS, with the ALPSO algorithm using the swarm-size of 1000, 6 inner iterations and 20 outer iterations. The bounds for the thickness were ±1% around 5 mm. The bounds for the four other parameters were -50% and +100% of the parameter value. Although we optimized neither the choice of the algorithm, nor the parameters (swarm size, inner and outer iterations, etc.), the software required about 1 minute on a common personal computer (Intel® Core™ i7-5600U CPU @ 2.60 GHz) to retrieve the parameters from a 300-point data set. In fact, the computation time strongly depends on the size of the parameters space, and thus on the bounds we specify. In the difference between the fit and the simulated data shown in FIG. 3, one can see an extremely small discrepancy between the targeted time-trace and the fitted one. This discrepancy comes from the noise added to the simulated sample time-trace, meaning that the algorithm converged and that the L 2 residual error is simply the L 2 of the added noise. The results for the Drude-Lorentz parameters are very close to the targeted ones, the relative error is ; ; ; and for the thickness discrepancy. The error of the thickness should be compared to the delay line uncertainty: The errors on  and Δε are to be compared to: Which correspond to the sampling time of our THz-TDS setup. The errors on ω 0 and γ are to be compared to the frequency sampling: All results show that for a Lorentz-modelled time-trace that is not limited by noise (see Eq. 15), the proposed methods and the software are validated and reach the target precision. To better evaluate the actual performance of the method on real data, we tested its robustness by increasing the amount of noise added to both time-traces with and without a sample. Specifically, the amplitude of the Gaussian white noise was increased from 5×10 -5 up to 5 amplitude units, corresponding to a dynamic range of 105 dB to 5 dB. Then, the algorithm was used to retrieve the parameters using the same bounds as in the previous case. The relative error of the parameters as a function of the applied noise level is shown in FIG. 5. The L 2 residual error was not shown, since it always corresponds to the added noise (written here as the noise floor). B. Dependency of the error on the dynamic range Firstly, one can see that, globally the errors increase with the applied noise, as expected. Secondly, the relative error follows a trend between proportional to the amplitude and proportional to the energy, as indicated by the solid gray lines in FIG. 5. Thirdly, the relative uncertainty is larger for γ and Δε simply because those parameters are intrinsically smaller. 
Finally, one can compare the retrieval precision with the value mentioned above and plotted in dotted lines on the same figure. Clearly, when the noise is low, we exceed the targeted precision for each parameter (i.e. the software yields better-than-expected results). Furthermore, we reach a precision below 1% for a noise floor of -40 dB. This level of noise is in fact the "single shot" noise in our experiments (meaning one scan by the THz-TDS delay line taking ~ 50 ms of acquisition time). This result implies that the method is robust enough to follow the parameters of the one-oscillator Drude-Lorentz model in real time with an data with a noise level corresponding to experiments at 20 Hz repetition (video frame time). This will enable one, for instance, to follow the temporal evolution of a parameter extracted from the model (typically the width of an oscillator may depend on temperature) at this time frame. The previous results show a very accurate retrieval of the frequency parameters even in the presence of strong noise. However, this does not mean that one would be able to discriminate a doublet: two different peaks separated by a frequency that is small compared to their resonant frequency ν 1 and ν 2 (resolution). The criterion that determines whether we are able to discriminate between two neighbouring peaks is simple: the L 2 residual error of the fit given by a twooscillator model must be significantly smaller than that of a one-oscillator model. To test the performance of our methods regarding this criterion, we simulated a fictitious sample with the same parameters as in section A, on which we added an oscillator at slightly higher frequency. Then, we fit the data coming from this fictitious sample using first one oscillator and then two oscillators. The results as a function of the separation between the two oscillators, δν, are shown in FIG. 6. C. Resolution test One can see that the resolution of our fit is clearly below the width, δω, of the peaks themselves. However, we are limited by the temporal window of the experiments (100 ps). With the time-domain fit, a doublet will give a beat note, meaning a low frequency envelop on a high frequency carrier wave. Thus, if the dynamic range is high enough, the only limitation will be the time window of the experiment, which should be long enough to detect the variations of the envelope. Since the peaks have a finite size, the envelope will also be damped. Consequently, it will be lower than the noise. From these considerations, one can derive the following equation giving the optimal time window: where is the noise amplitude value in the frequency domain, δf is the full frequency range, and is the depth of the considered peak in the frequency domain. It is important to note that and are power spectral densities and thus are expressed in "squared-amplitude per Hertz" units. Finally, it is important to note that the term δf is in fact the inverse of the sampling time. To summarize, as expected, the resolution of the proposed method is higher than the width of the peak and is still is limited by the frequency resolution of the discrete Fourier transform. To validate the method with a more complex sample, we simulated one fictitious sample (like a carbohydrate) made of six oscillators with parameters given in Table 1 From the spectrum, one can see four dips. An additional absorption feature can be seen in the retrieved refractive index around 0.5 THz, corresponding to a fifth dip. 
If one intends to directly find the result, five dips requires 18 fit parameters, which corresponds to an enormous parameter space (e.g. taking only 10 values for each parameter would mean 10 18 sets to test). Consequently, we began by fitting the two strongest oscillators (the one leading to the dip slightly above 1 THz and the one around 2.5 THz) and then adding the next oscillators one by one to strongly diminish the volume of possibilities. The new oscillators were added by picking the ones corresponding to the most significant residual error in the spectrum [as shown in Fig. 8 (B)]-leading to a decrease of the error as shown in Table 2. D. Validation with multiple-oscillator Drude-Lorentz model Step Here, one can see that the residual error decreases step by step with additional oscillators. Moreover, in FIG. 8 (B) it is clear that the addition of a new oscillator decreases the residual error in the frequency range where the oscillator is added. After step #4, a residual error remains in the region around 1 THz. Thus, we added a sixth oscillator that strongly decreased the residual error. This is due to the fact that, in this region, two oscillators were present in the model (Table 1). Finally, because convergence was not fully obtained, we performed the sixth and seventh steps to refine the precision by fitting in a more constrained parameter space than used previously. This resulted in a precision and resolution similar to the ones obtain in FIG. 5 for each oscillator. Indeed, the higher the oscillator frequency, the lower the signal, and therefore the lower the precision. These results show that the method is not only valid for one or two oscillators but for a set of up to six oscillators, even if two of those oscillators are close together in frequency similarly to the example of FIG. 6. To summarize the first validations on fictitious samples, we proved that Fit@TDS enables the retrieval of the oneoscillator Drude-Lorentz model very accurately, even in a presence of a noise comparable to a single-shot video timeframe measurement in a commercial THz-TDS system. We showed that the proposed method is capable of identifying two different peaks of the two-oscillator Drude-Lorentz model with a resolution power that depends on the total time span of the recording, and not on the width of the peaks (as long as the measurements are not noise limited). We derived an expression (Eq. 15) to predict the optimum time span for THz-TDS measurements. Finally, we tested Fit@TDS on an example sample including six oscillators adopting an iterative use of the method. The results we obtained were at the same precision and resolution as those for a fewer number oscillators. The recursive approach we used for this fictitious sample allowed keeping a reasonable computational load while adding up to seven oscillators. The idea is to use the information remaining in the residual error to improve the model by adding oscillators in the vicinity of a clear bump in the residual error. Nevertheless, it is important to test our method on real samples. For instance, with such samples, we are given the ultimate precision of the sample thickness, as well as the other parameters. In real cases, the thickness will be measured at some relative precision that may vary along the surface of the sample. In addition, the other parameters must be inferred from experiments and not from an artificial input. V. VALIDATION WITH REAL SAMPLES After testing the method on fictitious samples, it was used on real samples. 
First, on two high-resistivity 5-mm-thick silicon wafers, then on a lactose pellet, and finally on a metasurface made of gold split-ring resonators on a quartz substrate. A. Silicon wafer sample High-resistivity silicon is a typical reference material for THz-TDS. Several historical studies have been published since the emergence of the THz-TDS field [4,41,16,42], hence this type of sample is a good starting point to test our method. A float-zone high-resistivity (> 10 kΩ·cm) silicon wafer with a thickness of 5 mm ± 1% was purchased from Sil'tronix Silicon Technologies. We measured the thickness with a digital thickness comparator from Mitutoyo to be 5016 ± 4 µm. We note that this value is likely overestimated, since any imperfections on the wafer or dust between the wafer and the marble plate will create additional thickness. We performed the THz-TDS measurement at a temperature of 23 ± 1°C, in a timing window of 560 ps with steps of 33 fs. Before plotting and treating the data, we bandpass filtered them with a box car frequency profile below 160 GHz to remove spurious parasitic modes. The time-traces and the corresponding spectra are presented in FIGS. 9 and 10, respectively. Such samples are usually modeled using the Drude model [43]. Consequently, we fitted the data using first a nondispersive permittivity (constant index), then a pure Drude model, and finally the Drude-Lorentz model with an oscillator to take into account the low energy branch of phonons [44,45]. The complex refractive indices for all three models are plotted in FIG. 11 and the corresponding parameters in Table 3. Firstly, from FIGS. 9 and 10, we conclude that the fit worked fairly well for all three models, as is confirmed by the small value of residual error listed in Table 3. Secondly, one can see in the time domain that the temporal position of the residual error is clearly physical and thus can be interpreted. The first residual error from 0 ps is an artifact due to the box car filtering, and can be seen at lower magnitudes close to the end of the time-trace. Then the main source of residual error is perfectly time-correlated with the recorded pulses, and does not show any specific spectral feature-meaning that the models do not perfectly fit the experiments. This is the case for constant index and Drude model since one can see a real improvement by implementing the Drude-Lorentz model. In this case, the fact that the frequency domain residual error follows the frequency shape of the pulse fairly well, and that an oscillation at the Fabry-Pérot period is observed in the frequency domain, may indicate again that the model may not be fully sufficient. In fact, more sophisticated models have already been drawn for silicon in the THz domain, as in Ref. [46], for example, which is based on microscopic transport on which one can add a Lorentz oscillator to take into account low-energy phonons or absorption due to impurities. To go a step further, the refractive indices retrieved by timedomain and frequency-domain methods were plotted in FIG. 11. We empathize that the thickness used for the fit in the frequency domain was the one extracted from the fit in the time domain (4.99908 mm), which is in good agreement with the comparator measurement. We also performed the fit with the measured thickness, but obtained a shift of -0.01 in the refractive index, and more importantly additional noise (which manifested as oscillations in the refractive index of amplitude ~1.5×10 -3 ). 
The thickness shift and the additional noise observed when the measured thickness is used indicate that the thickness measured at three points on the wafer does not correspond to its average thickness. Here, it is clear that the time-domain model is in total agreement with the frequency-domain retrieval, illustrating the consistency between the two methods. Moreover, our method is very precise and yields the effective thickness of the sample, which is extremely important for the study. As for the fit results using the Drude-Lorentz model, they agree with the literature on the global value of the refractive index [16]. Additionally, both the frequency and the width of the Lorentz oscillator tend to be very close to the ones reported in Ref. [45], showing the reliability of the method. To conclude, we were able to determine the refractive index and absorption of a high-resistivity silicon wafer with high precision. Considering the effect of jitter in the time-domain measurements in Eq. (13), and the fact that we did not measure the refractive index of the nitrogen gas in the band, which may induce a systematic error, we trust the first three digits of this refractive index. Exceeding this level of accuracy would require additional stabilization (e.g. in temperature) and measurements that are beyond the scope of this paper. Nevertheless, we obtained values very close to the metrological ones in the literature with a faster and simpler method, which is more than sufficient for the majority of applications.

B. Lactose pellet

While silicon is a perfect first example, it does not have many spectral features to be fitted in the THz range. Thus, we used the method on a pellet of lactose monohydrate (CAS 5989-81-1) powder with purity ≥ 99% total lactose basis (determined by gas chromatography) purchased from Sigma-Aldrich. Due to the fragility of the pellet, it was extremely difficult to measure its thickness. Nevertheless, we measured roughly 900 µm with an uncertainty of ± 20 µm. To fit the data, we followed the methodology used in section IV. We first analyzed the transmission data and found two strong peaks and numerous Fabry-Pérot oscillations. Hence, we fitted the data using the two-oscillator model. The resulting L2 residual error revealed two additional oscillators, as well as some losses at high frequency. Consequently, we added three oscillators to take these features into account. The fit results are shown in FIG. 12, superimposed with the frequency-domain fitted refractive indices. Table 4 lists the resulting fit metrics. Again, because the thickness predicted by the time-domain fit was more reliable than that obtained with the comparator measurement, we used the former to retrieve the refractive indices in the frequency domain.

The frequency-domain and time-domain indices are in good agreement at low frequency. However, from the highest peak around 1.37 THz up to the end of the spectral band, a significant discrepancy was found due to a strong absorption peak. Evidently, the phase is lost in the frequency domain and thus the algorithm is not able to unwrap it properly [23]. This example clearly shows the robustness of the proposed method with respect to this issue. To go a step further, one can see that three absorption peaks are distinguishable, each with a corresponding feature in the real part of the refractive index. The peaks at 0.53 THz [47,48], 1.17 THz [49,50] and 1.37 THz [51,52] are the characteristic absorption peaks of α-lactose monohydrate [10,49,53]. The peak at 1.81 THz is more difficult to measure (higher frequency and in the vicinity of a water line), and is therefore less prominent in the literature.
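For context, the Fabry-Pérot oscillations mentioned above and the role of the thickness as a fit parameter can be illustrated with the standard frequency-domain transfer function of a plane-parallel slab in air. This is a generic sketch under an assumed sign convention, not the forward model actually implemented in Fit@TDS (which operates in the time domain).

```python
# Sketch of a standard transfer function of a plane-parallel slab in air,
# including Fabry-Perot echoes; the thickness d enters as a fit parameter
# exactly like the oscillator parameters. Convention assumed:
# n_tilde = n + 1j*kappa with time dependence exp(-1j*omega*t).
import numpy as np

C = 299792458.0  # speed of light (m/s)

def slab_transfer(omega, n_tilde, d, n_echoes=5):
    """Sample/reference field ratio for a slab of complex index n_tilde and
    thickness d (m), keeping the first n_echoes Fabry-Perot round trips."""
    t_in_out = 4.0 * n_tilde / (1.0 + n_tilde) ** 2            # entrance * exit Fresnel factors
    r = (n_tilde - 1.0) / (n_tilde + 1.0)                      # internal reflection coefficient
    prop = np.exp(1j * (n_tilde - 1.0) * omega * d / C)        # propagation relative to air
    fp = sum((r**2 * np.exp(2j * n_tilde * omega * d / C)) ** k
             for k in range(n_echoes + 1))
    return t_in_out * prop * fp
```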
This 1.81 THz peak was nevertheless reported in Ref. [52] and may be due to the presence of an anhydrous phase [54]. Additionally, the retrieved value of the refractive index becomes unreliable above the strongest absorption peak at 1.37 THz. At this frequency, the signal decreases to the level of the noise, and therefore the phase becomes extremely noisy, leading to an error in the phase unwrapping. Finally, the shape of the high-frequency losses does not match the shoulder of a hypothetical strong infrared Lorentzian absorption. Thus, we tentatively attribute this last feature to scattering losses in the pellet, which is made of nano-crystallite powder. To summarize, we were able to retrieve the frequency, the width and the oscillator strength of four oscillators in the range of 0.2 to 2.5 THz. Further improvements could be made by including scattering in the time-domain model, as has been done in the frequency domain [25].

C. Metamaterial on quartz substrate

To go a step further, we used Fit@TDS on an artificially structured material called a metasurface. Metasurfaces are increasingly used in the THz range, for example as narrow-band terahertz modulators [55] or as light-matter interaction enhancers [56]. Having precise insight into the properties of such components will accelerate the optimization of the fabrication process and allow the design of improved metasurfaces. Thus, we fabricated a metasurface made of split-ring resonators (200 nm of Au and 20 nm of Ti) on top of a 200-µm-thick crystalline quartz substrate (z-cut) using electron-beam lithography and lift-off. A scanning-electron microscope (SEM) image of the metasurface is shown in FIG. 13. The filling factor of the quartz covered by metal is ~7%, corresponding to our previous hypothesis (see Sec. III.B for details). To perform the experiments, we attached the sample on top of a 1-mm-thick quartz substrate to prevent any interference between the near-field modes of the metasurface and the back interface of the thin substrate.

The samples were then measured and fitted. As a first step, the thickness and refractive index of the quartz without the metasurface were retrieved to compare with the ones found in the fit of the metasurface. Both results are shown in FIG. 14. There, one can see that the fits succeed with good precision (9.2% and 10.7% residual error, respectively). From the temporal residual error, we deduce that the small residual error arises from two different effects. First, two temporal pulses around 27 and 37 ps are not fitted by the model. Those two peaks correspond to unwanted reflections at the thick/thin quartz interface, undoubtedly due to a thin air layer between the two substrates (the peak at 27 ps corresponds to propagation through ~211 µm of quartz, while the one at 37 ps corresponds to ~973 µm). This accounts for most of the residual error (~70%). Second, there remains a residual error that is temporally correlated with the main peak. No specific spectral feature is responsible for this residual error when analyzing the Fourier transform of this peak. We attribute it to the modification of the main pulse by the effect just described: since energy appears in the other peaks, it must be removed from the main one. Despite these effects, the software is able to retrieve the parameters of the metasurface, as shown in Table 5. We note that these parameters are consistent with the material's refractive index. Additionally, the total quality factor Q tot corresponds to the one found by dividing the resonance frequency by the width of the transmission dip.
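As a quick cross-check of the Q tot value just mentioned, the resonance frequency and total quality factor can be estimated directly from the measured transmission dip. This is an illustrative sketch only; the actual TDCMT fit in Fit@TDS additionally separates radiative and absorption losses, which a simple width estimate cannot do.

```python
# Sketch: estimate f0 and Q_tot = f0 / FWHM from a measured transmission dip,
# as a cross-check of the TDCMT fit parameters (single, well-isolated dip assumed).
import numpy as np

def q_from_dip(f, transmission):
    """Resonance frequency from the dip minimum; FWHM from the half-depth crossings."""
    i0 = np.argmin(transmission)
    f0, t_min = f[i0], transmission[i0]
    half_level = (np.max(transmission) + t_min) / 2.0    # half-depth level of the dip
    inside = np.where(transmission <= half_level)[0]     # points inside the dip
    fwhm = f[inside[-1]] - f[inside[0]]
    return f0, f0 / fwhm
```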
Both of these points confirm that the method produces reliable results. It also allows one to go a step further by distinguishing between external losses (75%) and absorption losses (25%), a split close to the values usually found in simulations. Finally, since the metasurface is processed at a quartz/air interface, it is clearly non-symmetric. The coupling originating from the quartz side is about a factor of three larger than that from the air side. This is again expected from momentum conservation, and corresponds to the depth of the observed dip, as demonstrated in Ref. [57]. To summarize, Fit@TDS enabled us to retrieve all of the parameters of a resonant mode of a metasurface, showing that not only material parameters but also photonic parameters can be retrieved using our method. In addition, the residual error of the fit was clearly related to experimental perturbations, which can help a user to understand their experiment.

VI. CONCLUSION

In this paper, we presented a new method, and associated software, to fit THz-TDS data with material and metamaterial models based on time-domain fits. The goal of the software is to provide an improved, robust tool that gives more precise insight into an unknown sample, as well as to accelerate the analysis of a known sample, for instance in the case of quality control. This software is freely available, and we provide a link to the source code in Ref. [31]. We first explained the method, then tested it on fictitious samples and finally on real ones: a semiconductor, a molecular crystal and a metamaterial.

Compared to other available software and methods, Fit@TDS has five main advantages. (i) A precise measurement of the thickness of the sample is not needed, since the thickness plays the same role as the other fit parameters. In fact, obtaining the thickness of materials such as carbohydrates or semiconductor wafers at sub-micrometer precision is challenging, so avoiding this step is a real improvement. (ii) We analyze the refractive-index modelling problem as a whole, and thus we have only a small number of fit parameters compared to the usual two values per frequency; as a result, we are much less sensitive to noise. This enables us to reach very high precision on the refractive index (< 10^-3 for the silicon wafer, limited by the setup). (iii) Since the residual fit error is in amplitude units, one can clearly interpret this error, leading to a better understanding of the experiments and of possible oversights in the implemented model. In particular, this can reveal imperfections in the experiments, as we demonstrated on the quartz/metasurface sample. (iv) Since we are fitting in the time domain, the phase is not lost in the presence of strong absorption, and an additional step is not needed [23], as shown with our experiments on lactose. (v) Finally, it allows precise, reliable and consistent retrieval of material parameters using the Drude-Lorentz model, but also of metamaterial parameters with TDCMT, giving access to the internal and external losses of the metamaterial, as well as the coupling directivity.

Indeed, this method is not limited to THz-TDS systems and can be applied to any time-domain spectroscopic system in which one has access to the electric or magnetic field. For instance, implementing this method with a dual-comb spectroscopy system (see Ref. [58] for a review of asynchronous optical sampling, ASOPS) would allow one to follow the fit parameters in real time at a 20 kHz rate.
To further enlarge the scope of applications, it is possible to simply change the model in the open-source code of the software. It will then be possible to simulate other materials, for example by taking into account scattering [25], by using Debye-derived models for liquid or impregnated samples [59], or even by combining the method with mixture-identification approaches [60]. Alternatively, as a first step, one can simply implement oblique incidence or the modeling of measurements performed in a reflection setup. Additionally, since the software enables one to simulate the photonic part, it will be possible to implement circuits made of THz resonators [61], or any other THz photonic component involving dispersive elements [53]. Since the residual error of the parameters reaches the noise limit of the THz-TDS setup, improving the performance of the software would require an increase in the sensitivity of THz-TDS systems. For instance, several groups are working on emission antennas able to deliver more powerful and broadband pulses [62], as well as on improved detection systems [63]. Furthermore, it would be very helpful to minimize the post-pulse emission of the antenna. This would allow the use of additional noise-reduction techniques [64], and thus improve the bandwidth and the precision of the fit. To conclude, we hope that the community will make use of Fit@TDS and implement additional features corresponding to their needs. For example, one could imagine implementing a model to determine the concentrations of a known gas mixture, or the doping or impurity concentrations in a known material. Overall, because Fit@TDS runs on common personal computers and operating systems, we anticipate that it will become a valuable tool for the community and will help the spread of THz-TDS to new fields of research.
Relativistic Normal Coupled-cluster Theory Analysis of Second- and Third-order Electric Polarizabilities of Zn I We present precise values of electric polarizabilities for the ground state of Zn due to second-order dipole and quadrupole interactions, and due to third-order dipole-quadrupole interactions. These quantities are evaluated in the linear response theory framework by employing a relativistic version of the normal coupled-cluster (NCC) method. The calculated dipole polarizability value is compared with available experimental and other theoretical results including those are obtained using the ordinary coupled-cluster (CC) methods in both finite-field and expectation value evaluation approaches. We also give a term-by-term comparison of contributions from our CC and NCC calculations in order to show differences in the results from these two methods. Moreover, we present results from other lower-order methods to understand the role of electron correlation effects in the determination of the above quantities. A machine learning based scheme to generate optimized basis functions for atomic calculations is developed and applied here. From the analysis of the dipole polarizability result, accuracy of the calculated quadrupole and third-order polarizability values are ascertained, for which no experimental values are currently available. I. INTRODUCTION Atoms are spherically symmetric, but under the influence of stray electric fields, distribution of their electric charges are deformed [1,2]. In the ground state of a closed-shell atom, the first-order energy shift due to weak electric field vanishes and the leading order contributions to the energy shifts come from the second-order followed by third-order effects [3,4]. These contributions are usually factorised into powers of electric field strength and in terms of electric polarizabilities that are atomic state dependent but independent of the applied electric field strength [1,5]. With the knowledge of these polarizabilities, it is possible to estimate energy shifts in an atomic system for an arbitrary weak electric field. As a result, there has been immense interest to study electric polarizabilities both experimentally and theoretically [6,7]. Among them, the electric dipole polarizability (α d ) has been studied extensively due to its predominant contribution to the energy shift, followed by electric quadrupole polarizability (α q ), while third-order polarizability (B) has received very little attention. The α d values of Group IIB elements Zn, Cd and Hg have been measured accurately [8][9][10] and theoretical calculations based on sophisticated many-body methods agree with the experimental values for the Zn and Hg atoms [9,[11][12][13][14][15][16][17][18]. However, many calculations show significant deviations from the experimental values for the Cd atom [13][14][15]. More precise knowledge of polarizabilities in these atoms is quite useful for several fundamental applications. For example, the Cd and Hg atoms from this group are being used as candidates in atomic clocks [19,20] and a precise knowledge of polarizabilities in these atoms will be required to estimate systematic effects of the atomic clocks. Hg is used to probe violation of parity and time-reversal symmetry violating interactions [21,22]. 
As an important application, the knowledge of α d , α q and B values are needed to construct the polarization potential seen by an external electron (or positron) in the vicinity of an atom in the scattering physics problem as given by [23,24] With the advent of modern technologies, highprecision measurements of energy shifts due to external electric fields are feasible, from which precise values of higher-order polarizabilities can be inferred. However, there are two important objectives behind pursuing theoretical studies of the electric polarizabilities of atomic systems. First, it helps demonstrate the validity of a theoretical approach by reproducing the experimental result and is able to provide insights into the behavior of electron correlation effects within the investigated atomic system. Secondly, but more importantly, the reason behind performing theoretical studies of electric polarizabilities by developing and applying sophisticated manybody methods is to provide their accurate values in systems where experimental results are not available, so as to guide future measurements of these quantities. As mentioned above, a number of calculations of α d for Zn have been performed [9,[11][12][13][14][15]17] but there does not exist any measurement of α q and B. Only a few theoretical studies of these quantities based on the non-relativistic formalism are available [9]. Owing to significant challenges, one is yet to see the experimental determination of B values. It is therefore interesting to study the role of electron correlation effects by evaluating α q and B. Among the commonly used atomic many-body methods for calculating spectroscopic properties, coupledcluster (CC) theory is proven to be one of the most reliable and powerful methods [25][26][27]. The CC theory has also been applied to nuclear, molecular and condensed matter systems to account for correlation effects among the subatomic particles accurately [28][29][30] and hence, this theory is being treated as the gold standard for manybody methods. Within the CC formalism, a variety of procedures are being proposed and implemented to evaluate various spectroscopic properties reliably [25,29]. Several calculations of atomic polarizabilities have been reported employing either the semi-empirical approaches [7,31] or the less accurate numerical approaches like the finite-field method [9,[11][12][13][14]. It is possible that semiempirical calculations can give reasonably accurate results because a part of uncertainties are excluded by making use of some experimental quantities, but they cannot demonstrate the true potency of the employed many-body methods. Due to the odd-parity nature of the dipole operator, it is not convenient to include the dipole interaction Hamiltonian in the atomic calculation to estimate energy level shifts by adopting spherical symmetry properties of atomic systems. Thus, results from the finite-field approach are estimated using programs that have been developed for evaluating molecular properties by imposing additional constraints to describe atomic systems [32] in the Cartesian coordinate system [33]. By employing a spherical coordinate system, the interaction Hamiltonian due to the dipole interaction can be treated perturbatively. Though the quadrupole operator is an even parity operator, it cannot be added to the atomic Hamiltonian in order to evaluate the quadrupole polarizability owing to the fact that it is a finite rank operator. 
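The sum-over-states expressions referred to in this section (whose explicit equations are not reproduced in this excerpt) can, in principle, be evaluated numerically once matrix elements and excitation energies are known. The following is only a rough illustrative sketch: it assumes a J = 0 ground state, reduced dipole matrix elements in atomic units, and the conventional 2/3 angular prefactor; the prefactor, the function name and the example numbers are assumptions for illustration and are not taken from this work, which instead solves for the first-order perturbed wave functions directly.

```python
# Rough sketch of a sum-over-states estimate of the static dipole polarizability
# for a J = 0 ground state (atomic units). The 2/3 prefactor and all numbers are
# illustrative assumptions; the paper evaluates alpha_d from first-order
# perturbed wave functions rather than from an explicit sum over states.
import numpy as np

def alpha_sum_over_states(reduced_matrix_elements, excitation_energies, prefactor=2.0 / 3.0):
    """reduced_matrix_elements: |<0||D||k>| in a.u.; excitation_energies: E_k - E_0 in a.u."""
    d = np.asarray(reduced_matrix_elements, dtype=float)
    de = np.asarray(excitation_energies, dtype=float)
    return prefactor * np.sum(d**2 / de)

# Hypothetical two-state example; alpha_q would follow analogously with
# quadrupole matrix elements and a different angular prefactor.
alpha_d_estimate = alpha_sum_over_states([3.4, 1.1], [0.21, 0.29])
```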
In this work, we have applied a linear response approach to determine the α d , α q and B values of Zn in the relativistic CC (RCC) theory framework by retaining the spherical symmetry properties of atoms. Earlier, we had employed this approach to determine α d of Zn using the expectation value approach of the RCC theory [15]. This approach involves a non-terminating series and contribution from the normalization of the wave function was not included in order to conveniently evaluate the expression by making use of the connecting terms. However, two different implementations of this method had produced quite different α d results for the Cd atom [15,17]. To avoid brute-force termination in the expression and ambiguity in accounting for the contribution from the normalization of the wave function, we have developed the relativistic normal coupled-cluster (RNCC) theory to determine the aforementioned quantities [34]. The RNCC method has been applied previously for the accurate determination of α d values for Xe, Cd and Hg atoms [34][35][36], resulting in a reconciliation between the theoretical and experimental polarizability values for the Cd atom [35]. This immediately prompted a re-analysis of the experimental data for this atom [37]. A recent calculation of α d of Cd using the finite-field approach in the RCC theory framework offers further support to these findings [38]. In view of this, it is imperative to carry out the polarizability calculations of Zn using the RNCC theory and make a comparative analysis with the cal-culation obtained using the expectation value approach of the RCC theory to understand better about both the methods. In this work, we have performed calculations of α d , α q and B using expectation value approach in the RCC and RNCC methods. We then compare results from both the approaches term-by-term. In addition, we give results from a lower-order method using the same basis functions in order to demonstrate the propagation of electron correlation effects at different levels of approximations in the many-body method. The results are given in atomic units (a.u.) unless otherwise stated explicitly. II. THEORY In the presence of a weak electric field E(r) with strength E 0 , the ground state energy level of Zn can be expressed in the perturbative approach as [3,9,39] where E (n) 0 denotes n th order correction to the energy with E (0) 0 as the ground state energy level of the free Zn atom and the first-order energy shift to the ground state of Zn is E (1) 0 = 0. When E(r) is generated from a charge Q e placed at a distance r, then the above equation is given by [9,24] On this basis, when an external charged particle like an electron or positron is seen in the vicinity of an atom, its polarization potential is constructed using Eq. (1). With the prior knowledge of atomic wave functions |Ψ (0) k and energies E (0) k of the free Zn atom with k representing the level of a state, we can evaluate α d , α q and B values using perturbative analysis as [24] and B = 2 where D and Q are the electric dipole and quadrupole operators, respectively. 
Since it is impractical to evaluate the complete set of |Ψ (0) k for the evaluation of the above quantities, they can be determined conveniently by expressing as [40][41][42] and where the first-order wave functions are defined as and Therefore, contributions from all the intermediate states in the sum-over-states to α d , α q and B can be accounted through the first-order wave functions by determining them as the solution of the following inhomogeneous equation in the ab initio framework with the atomic Hamiltonian H 0 and O denoting either D or Q. As discussed in the next section, these first-order wave functions are solved in the RCC and RNCC theory approaches in this work. A. Evaluation of properties In the relativistic framework, we consider the Dirac-Coulomb (DC) atomic Hamiltonian in the calculation which is given by where c is the speed of light, α and β are the Dirac matrices, V nuc is the nuclear potential and r ij the interelectronic separation between the electrons located at the r i and r j radial positions with respect to the center of the nucleus. We begin our calculations with the Dirac-Fock (DF) approximation (H 0 = H DF + V 0 with V 0 = H 0 − H DF for the DF Hamiltonian H DF ) and obtain the exact wave function by expressing as where |Φ 0 is the unperturbed DF wave function and the wave operator Ω (0) accounts for the contributions from V 0 . After including the external operator O, the firstorder perturbed wave function can be written as In the perturbative analysis, the unperturbed and perturbed effects are accounted by expressing [15,16,43,44] and where Ω (n,m) denotes inclusion of n th and m th order of V 0 and O ≡ D/Q operators in the calculations. In the many-body perturbation theory (MBPT method), the amplitudes of these wave operators can be determined using the generalized Bloch's equation [15,16,44] for each order of perturbation as given by Ω (n,m−r) P 0 OΩ (n,r−1) P 0 (17) by equating terms with the same order of perturbation from both the sides, where P 0 = |Φ 0 Φ 0 | and Q 0 = 1 − P 0 . To understand how electron correlation effects propagate from the lower-order level to the higherorder level of perturbation in the determination of polarizabilities, we consider one-and two-orders of V 0 in the second-order (MBPT(2)) and third-order (MBPT(3)) MBPT method respectively, and estimate the α d and α q values. It is obvious from here that the DF values of the polarizabilities can be obtained by considering zero-order of V 0 in the calculation. We also intend to verify the results by approximating Ω (0) ≈ Ω (0,0) = 1 and Ω (o,1) ≈ ∞ n=1 Ω (n,1) but accounting only the corepolarization effects in the random-phase approximation (RPA) framework [15]. The RPA as well as all-order contributions from the non-RPA effects can be captured simultaneously by the RCC theory [15,16,45], in which the unperturbed exact wave function is given by where T (0) accounts for electron correlation effects from V 0 . Analogously, the first-order perturbed wave function is given by where T (o,1) includes contributions from both V 0 and the perturbative operator O. In this approach, the expressions for α d , α q and B are given by [40][41][42] and Evaluating the above expressions involves two major challenges, even after making approximations in the level of excitations in the calculations. The first being that it has two non-terminating series in the numerator and denominator. The second that the numerator can have factors both connected and disconnected with the operators D or Q. 
These present practical problems in implementing and accounting for contributions from all these terms in a convincing manner. To address these problems partially, we approach the evaluation of the α d , α q and B values in a slightly different way as described below. Let us assume for the time being that the interaction operator O is a part of the atomic Hamiltonian and given by where λ = 1 and is introduced to keep track of the order of O in the calculations. The atomic wave function |Ψ of the above Hamiltonian in the RCC theory can be given by where |Φ is the modified DF wave function constructed in the presence of O with the corresponding electron excitation operatorT due to both V 0 and D, while T is also the electron excitation operator due to both V 0 and D but considering excitations from |Φ 0 . The expectation value of O using |Ψ can be mathematically given by Following Refs. [25,46], the above expression yields where the subscript c indicates that only connected terms can exist in the expression. Now expanding T in powers of λ as and retaining terms linear in λ in Eqs. (25) and (26), we get This expression is mathematically equivalent to the second-order polarizability expression. Therefore, the aforementioned polarizabilities can be evaluated in the expectation value evaluation approach of RCC theory as and Though not specified explicitly, the above expression for B is obtained by adding λ 1 D and λ 2 Q in the atomic Hamiltonian and equating terms in λ * 1 λ 2 and λ 1 λ * 2 from the expectation value expression given by Eq. (25), where λ 1 and λ 2 are two arbitrary complex parameters. Before we pursue the calculations of α d , α q and B using the above connected terms in the RCC theory, we intend to point out a few issues associated with these expressions. Although it removed the non-terminating series appearing in the denominator, it still contains a nonterminating series in the numerator. Again, the above derivations of the expressions were based on the assumption that no approximation was made in T , but the actual calculations are carried out after approximating it to a certain level of excitations. Thus, the cancellation of the normalization of the wave function may not be exact and it will slowly tend towards exact with inclusion of higher and higher-order terms gradually. This is also true in the case when external perturbation is not included in the atomic Hamiltonian and expectation value of an operator is evaluated using Eq. (26) in the RCC theory. Nonetheless, these problems can be circumvented in the RNCC method as discussed below. First, we want to make it clear that we shall approach in the same manner from Eq. (23) of RCC theory to derive expressions for polarizabilities in our RNCC theory. As in the usual approach of the RNCC theory, the ket state |Ψ is expressed as the ordinary RCC theory but in place of Ψ| a new bra state Ψ | is defined for H such that both Ψ| and Ψ | have the same eigenvalue for H and it satisfies the biorthogonal condition [29,47,48] Ψ |Ψ = 1. Due to the fact that Ψ| is constructed by the deexcitation RCC operator (T † ), the RNCC bra state is expressed as with a de-excitation operator Λ. It then obviously follows that To ensure that both Ψ| and Ψ | have the same eigenvalue for H, it is imperative to impose the condition whereH = e −T He T = (He T ) c is a terminating series. This leads to the amplitude determining equations for the T and Λ operators as and respectively, where |Φ * 0 is an excited state determinant with respect to |Φ 0 . 
Now, adopting the perturbative approach of Eq. (23), we can expand When D and Q are included simultaneously along with parameters λ 1 and λ 2 in Eq. (23), the Λ operator can be expanded in both λ 1 and λ 2 . Consequently, the RNCC expressions for α d , α q and B can be given by and At this juncture, we would like to mention that if the above RNCC theory derivations were made based on Eqs. (18) and (19) instead of starting the derivation from Eq. (24) then we would have got extra and (41) respectively. It implies that deriving the expressions for α d , α q and B from the expectation value equation given by Eq. (25) has a lot of computational advantages. Nonetheless, the final results should be independent of a theoretical approach provided an exact theory has been implemented in a consistent manner whereas in an approximated calculation, choice of implementation should be made judiciously in order to achieve more reliable results as well as a reduction in the computational cost. It is obvious from the above expressions for the polarizabilities in the RNCC theory that they are free from non-terminating series as appear in the RCC theory and normalization of the wave function naturally becomes one. Therefore, it is more practical to handle the calculations using the RNCC theory. It also removes the ambiguity regarding the non-exact cancellation between contributions from the normalization of the wave function and disconnected terms of the RCC theory in an approximated calculation. At present, we have considered the single and double excitations in the RCC theory (RCCSD method) and RNCC theory (RNCCSD method) by defining the electron excitation and de-excitation operators as and Another pertinent point that we would like to mention here is that considering the next-level excitations, i.e. triple excitations, in the RCC theory will be too challenging computationally as the number of terms will be quite large. Owing to the fact that both D and Q are one-body operators and that there is a constraint of having connected terms in the polarizability calculations, only a limited number of additional terms will appear in the above expressions if we intend to include higher-level excitations through the RNCC theory. This is why we may anticipate a significant difference between the results from RCCSD and RNCCSD methods, while these differences can be minimized with the inclusion of higherlevel excitations. Furthermore, the results will converge faster in the RNCC theory with the level of approximations compared to the RCC theory owing to the above constraint. To understand the differences between the results from the RCCSD and RNCCSD methods, we have given results from individual terms from both these methods and make a comparative analysis among them. It should also be noted that the MBPT(2) results are the lowest-order contribution of RPA. Therefore, we can explain the role of the lower-order and all-order core-polarization contributions to the determination of α d and α q by analysing the MBPT(2) and RPA results. Similarly, the MBPT(3) method introduces the lowest-order non-RPA contribution. The differences between the RPA and RCC results will represent the contributions due to non-core polarization effects to all-order and a comparison of this difference with the MBPT(3) results will give an idea about how important the non-core polarization effects are in the evaluation of α d and α q . 
Since evaluation of B depends on the first-order wave functions used in the determination of α d and α q , accuracy of B can be gauged from the accuracy of the calculated α d and α q values. B. Machine Learning based scheme for orbital optimization In the course of calculating accurate values of polarizabilities, it is necessary to use reliable single particle orbitals along with considering a powerful many-body method. There is a possibility that the method employed in a calculation is very accurate, but the results can still be bad due to use of poor quality single orbitals used in the construction of Slater determinants. In our approach, we need to know both the bound orbitals and continuum for pursuing the calculations. The bound orbitals can be obtained by solving differential equations, but it requires a different treatment to obtain the continuum. If two separate approaches are adopted to obtain the bound orbitals and continuum, then there can be orthogonality issues among them. Thus, we prefer to generate atomic orbitals using a single procedure by imposing orthonormality conditions among them. The single particle orbitals are given in the DF method as where P (r) (or Q(r)) is the large (or small) component radial function, and χ P/Q jl,jz is the corresponding normalized spin-angular function and is an eigenfunction of the j 2 , j z , l 2 and s 2 operators. The radial functions are expressed as a linear combination of Gaussian type orbitals (GTOs) such as and where C L η and C S η are the expansion coefficients (over which the variation is performed) over a N b number of GTOs, and N L and N S are the normalization constants for the large and small components, respectively. The GTOs describing the large and small component radial functions are given by [49,50] g L η (r) = r l e −αηr 2 and respectively, for the relativistic quantum number κ. In Eqs. (46) and (47), we have unknowns as C L η , C S η , and α η . The C L η and C S η coefficients depend upon the choice of α η parameters and N b . Also, a suitable choice of an appropriate set of α η values can describe the completeness of the space by a finite size of basis functions in a discretized manner. This is also a practical requirement to carry out the calculations. Thus, it is necessary to find out only the C L η and C S η coefficients to describe both P (r) and Q(r), and consequently |φ(r) . In fact, one can find optimized GTOs for molecular calculations, which make use of contracted functions to describe the large space of basis functions with minimum computational effort [51]. However, these functions are not suitable for describing atomic orbitals in the spherical coordinate system. In order to address this, we use reduced matrix elements for calculating atomic properties by which we simply avoid dependency of j z components of the orbitals explicitly, hence managing to include a much larger size active space in the calculations using many-body methods. Nonetheless, we solve the Roothan's equation [52] in the relativistic framework to obtain these coefficients to construct the single particle orbitals in the DF method. To choose the α η parameters conveniently, they are defined according to the even tempering condition [53], which treats α η = α 0 β η−1 for two arbitrary parameters α 0 and β. It is a challenge to search for a suitable set of α 0 , β and N b that can aptly describe the DF orbitals in atomic systems. 
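As a small illustration of the even-tempering condition just described, the exponents for one orbital symmetry form a geometric series in β. This is a sketch only; the starting values shown are the optimized parameters and the basis size for s orbitals quoted later in the text, and the function name is hypothetical.

```python
# Sketch of the even-tempering condition alpha_eta = alpha_0 * beta**(eta - 1):
# the exponents for one orbital symmetry form a geometric series. The values
# used here are the optimized alpha_0, beta and the 40 s-type GTOs quoted
# later in the text; the function name is illustrative.
import numpy as np

def even_tempered_exponents(alpha0, beta, n_basis):
    """Return alpha_eta for eta = 1 ... n_basis."""
    return alpha0 * beta ** np.arange(n_basis)

exponents_s = even_tempered_exponents(alpha0=0.0209, beta=2.07, n_basis=40)
```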
Finding an optimized set of basis functions to describe single particle orbitals can minimize the error due to the finitude of the basis set chosen and thereby improve the accuracy of the calculations. It is, however, not possible to do so by choosing the above parameters manually. Therefore, it is pertinent to find a scheme wherein we can find the optimal values for these parameters such that they can produce the DF orbitals with a high quality, meaning that the properties calculated from them should be physically more meaningful. In order to achieve this goal, we borrow the concept of a Loss Function L [54] which is an essential ingredient in optimization schemes used in Machine Learning and elsewhere. This necessitates the need for a reference data (i.e. for natural orbitals) with which we can compare our candidate data. In our case, the reference data are taken from the numerical solutions of the DF equations as solved by the GRASP-2K package [55]. We then choose the mean squared error (MSE) loss which provides a quantitative estimate of the closeness of our candidate data to the reference data. The MSE loss is given by [54,56] where N t is the total size of the basis set, y i are the reference values andŷ i are the candidate values. In our case, the reference data consists of the large and small components of the bound DF orbitals of the considered atom. IV. RESULTS & DISCUSSION In order to obtain the single particle orbitals using the optimised α 0 and β values, we compute the net MSE loss as a sum of individual losses due to the large (L large ) and small (L small ) components of radial function as L = W L L large + W S L small with the respective weight factors W L and W S . Since accurate calculations of polarizabilities mostly depend on the large component of the single particle wave function, we take a larger value for W L than W S . Again, optimizing the smaller radial components are very sensitive to numerical accuracy owing to their drastically smaller magnitudes. From this point of view, we consider L = 0.8L large + 0.2L small , and the initial values for the parameters are chosen as α 0 = 0.0009 and β = 2.15. We perform a grid-search for the local optima for a given range and step size in a region of interest. We show a plot showing the MSE loss for various values of α 0 and β values in Fig. 1. This gives us the local optimal values for the basis parameters as α 0 = 0.0209 and β = 2.07, using which we have performed the rest of the calculations. In Table I, we present the results for α d and α q from our RNCCSD method along with results from our DF, MBPT(2), MBPT(3), RPA and RCCSD methods. If we compare the results from all these methods apart from the RNCCSD method with that are reported in Ref. [15], we find that the results are slightly different at the given level of approximation. In Ref. [15] contributions from partial triple excitations were considered in the RCCSD calculation (CCSD p T) method, but the main reason for discrepancy in the result from the present work is owing to the use of optimized basis functions. A slightly large difference is seen at the MBPT(3) method because of consideration of contributions from a few additional Goldstone diagrams here which were not included in Ref. [15]. The RNCCSD results are given for the first time. The trends of the α d and α q values through different manybody methods are shown pictorially in Fig. 2 after normalizing with their respective DF values. 
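Returning briefly to the basis optimization described above, the grid search over (α 0 , β) that minimizes the weighted MSE loss against the GRASP-2K reference orbitals can be sketched as follows. The `build_df_orbitals` helper is a hypothetical placeholder for the Dirac-Fock solver that generates the candidate large and small radial components; it is an assumption, not part of this work's code.

```python
# Sketch of the basis-parameter optimization: grid search over (alpha_0, beta)
# minimizing L = 0.8*L_large + 0.2*L_small, the weighted mean squared error
# between candidate DF radial components and the GRASP-2K reference.
# build_df_orbitals is a hypothetical placeholder for the DF solver.
import numpy as np

def weighted_mse(cand_large, cand_small, ref_large, ref_small, w_large=0.8, w_small=0.2):
    loss_large = np.mean((np.asarray(ref_large) - np.asarray(cand_large)) ** 2)
    loss_small = np.mean((np.asarray(ref_small) - np.asarray(cand_small)) ** 2)
    return w_large * loss_large + w_small * loss_small

def grid_search(alpha0_grid, beta_grid, ref_large, ref_small, build_df_orbitals):
    best_params, best_loss = None, np.inf
    for a0 in alpha0_grid:
        for b in beta_grid:
            p_cand, q_cand = build_df_orbitals(a0, b)     # candidate large/small components
            loss = weighted_mse(p_cand, q_cand, ref_large, ref_small)
            if loss < best_loss:
                best_params, best_loss = (a0, b), loss
    return best_params, best_loss

# e.g. grid_search(np.linspace(0.001, 0.05, 50), np.linspace(1.8, 2.4, 30), ...)
```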
As can be seen from Table I and from Fig. 2, the difference between the DF and RNCCSD values for α d is small, but there is a relatively large difference for the α q value from these methods. This implies that the electron correlation trends of the two properties are different. In fact, results from the lower-order methods show that the α d and α q values increase in the MBPT(2) method compared to the DF values, while their magnitudes reduce slightly in the MBPT(3) method. As mentioned earlier, the MBPT(2) method contains the lowest-order RPA correlation terms while the MBPT(3) method introduces the lowest-order non-RPA correlation terms. Thus, the above trends in the results from the MBPT(2) and MBPT(3) methods suggest that there are cancellations between the correlation contributions arising through the RPA and non-RPA types of correlations. This is further evident from the RPA and RCCSD results. The RPA, which accounts for correlation contributions due to all-order core-polarization effects, gives very large values for both α d and α q . However, the RCCSD results are close to the MBPT(3) values. This means that the core-polarization effects enhance the magnitudes of both polarizabilities while other correlation effects contribute with opposite signs at the all-order perturbation level. We also observe that these cancellations are slightly larger (in percentage) for α q than for α d . Comparing results from the RCCSD and RNCCSD methods, the differences in α d and α q between the two methods are found to be about 3% and 1%, respectively.

In the above table, we have also given B values, but only from the DF, RCCSD and RNCCSD methods. The reason for not giving results from the other lower-order methods is that the theoretical evaluation of B depends on the first-order wave functions due to D and Q, so, by analysing the results for α d and α q using lower-order methods, the propagation of electron correlation effects from lower-order to all-order methods can be understood.

TABLE II. Comparison of contributions to α d (in a.u.) from various terms of the RCCSD and RNCCSD methods. We also compare the corresponding contributions from previously reported calculations using the RCCSD method. Here, Norm represents the difference between the contributions after and before normalizing the wave function with a normalization factor, NA stands for not applicable and Nonlin corresponds to the contributions coming from nonlinear terms. These abbreviations are followed in the remainder of this work.

In order to understand the differences between the RNCCSD and RCCSD values better, we give results from individual terms of these methods for α d , α q and B in Tables II, III and IV, respectively. In Table II, we also give the corresponding contributions to α d from the RCCSD method that were reported previously in Refs. [15,17]. It can be seen from the term-by-term comparison between the RCCSD and RNCCSD results of the present calculations that the contributions arising through the complex conjugate (c.c.) terms and the counter RNCC terms are quite different. Comparing values of individual RCC terms among the earlier calculations [15,17] and ours, we find that the trends from different terms differ. There is a similarity in the trends between the present work and Ref. [15], as the implementation procedures of the RCC method are the same in these calculations, but the basis functions used in the two cases are different.
We have used a much larger basis set of functions with 40, 39, 38, 37, 36, and 35 GTOs for the s, p, d, f , g and h orbitals respectively, whereas only 35 GTOs were used for each symmetry up to g orbitals in Ref. [15]. Furthermore, we have optimized the the GTO parameters by adopting a Machine Learning based optimization technique this time as mentioned earlier. It should be noted that calculations in Ref. [17] and in the present work are carried out using orbitals up to h-symmetry and same levels of approximations are considered in the RCC theory, but the implementation procedures are different in these works. Now comparing the correlation trends through the individual RCC terms of Ref. [17] with our calculation, we find the difference in the result from DT (along with its c.c. term) is small but they differ substantially among other RCC terms. We notice that contribution arising through the normalization of the wave function (quoted as 'Norm' in Table II) in Ref. [17] is quite large. In fact, it is larger than the difference between our DF and final RCC results (i.e. the net correlation contributions). In this view, we feel that by implementing the RCC theory in which only the connected terms are retained (also Norm factor does not appear) in Eq. (25) is more credible. Nonetheless, our RNCC theory takes care of this normalization factor in a natural manner. Unlike α d calculations, α q value of Zn was not evaluated using the linear response RCC theory earlier. Therefore, we could not make a comparative study between the contributions from our RCC terms with any earlier study in Table III but show only the comparison of contributions from various terms of the RCCSD and RNCCSD methods. As was mentioned earlier, the ket state in the RNCC theory is the same as the one in RCC theory. Thus, the differences in results between the RCCSD and RNCCSD methods are due to different contributions arising through various de-excitation operators in both the methods. From the comparison of contributions from in- dividual terms in the above table it is clear that the amplitudes of the de-excitation operators in the RNCCSD method are lower in magnitudes than their corresponding operators in the RCCSD method. These trends from individual terms are almost similar during the evaluation of both α d and α q . We also find that certain terms which contribute finitely in the RCCSD method, do not contribute in the RNCCSD method as they cannot give rise connected Goldstone diagrams. This is how the lower contributions arising through the de-excitation operators of the RNCCSD method get compensated with the contributions from the extra terms of the RCCSD method. We now turn to presenting the B values from the RCCSD and RNCCSD methods. Compared to α d and α q , very few theoretical studies of B have been carried out in atomic systems and mostly they have been reported using the FF approach. Inferring their experimental results are extremely challenging, thus accurate evaluation of B is quite interesting to understand roles of various correlation effects associated in its evaluation. Using the FF approach it is possible to achieve the final value of B at a given level of approximation in the many-body method, however, a linear response approach could demonstrate underlying roles of different electron correlation effects in the calculation of B through various physical interactions explicitly. 
Since B is evaluated using the first-order perturbed wave functions that are used to estimate the α d and α q values, the electron correlation trends of B can be somewhat guessed from the earlier analyses of α d and α q results at different levels of approximations in the many-body methods, but the additional inter-correlation among the dipole and quadrupole operators in the evaluation of B may offer quite a different picture. To fathom this, we make a comparative analysis of individual contributions to B values from individual terms of the RCCSD and RNCCSD methods in Table IV and its c.c. term have an opposite sign compared to other terms. In the RNCCSD method, the contributions from the counterparts of the RCCSD method follow similar trends. As can be noticed, all terms of the RNCCSD method are distinctly different from those of the RCCSD method in contrast to the cases of α d and α q , where half of the RNCCSD contributions were arising from the RCCSD terms. Analogous to the determination of α d and α q , we also find that there are several terms which do not contribute to the B value in the RNCCSD method whose counterpart terms in the RCCSD method do give finite contributions. This is owing to the terminating series that appear in the expression for the evaluation of B in the RNCCSD method against the non-terminating series of the RCCSD method. Also, it is seen that contribution from the Λ in the RNCCSD method. This may be owing to larger magnitude of the single excitation RCC operator due to the dipole operator. Our final recommended values from the RNCCSD method along with the previously calculated and experimental results for α d , α q and B are given in Table V. The estimated uncertainties from our calculations are quoted beside the recommended values. These uncertainties account for the extrapolated contributions from the high-lying basis functions that are not included in the many-body methods, the neglected contributions from the higher-order Breit and QED interactions and from the neglected higher-level excitations. Errors due to the extrapolated basis functions and relativistic effects are analysed by employing the MBPT and RPA methods, while uncertainties due to the higher-level excitations are estimated by analysing contributions from the dominant triple excitations in the perturbative approach. Individual contributions from various sources to the above quantities are listed in Table VI and the net uncertainty to the final value is given by adding all the contributions in quadrature. From these analyses, we recommend the values for α d to be 38.99(31) a.u, for α q to be 314(4) a.u and for B to be −2195(50) a.u.. We have also listed the previously reported experimental and calculated values of α d , α q and B in the above table. Results from the CC and RCC methods are given from both the CCSD method and the CCSD method with contributions from partial triple excitations (CCSD(T) method) along with their relativistic versions. The differences in the results from the CCSD and CCSD(T) methods can indicate importance of the neglected contributions from the triple excitations. The experimental value of α d listed in Table V was measured by using Michelson twin interferometer technique [9]. Later, this value has been revised by fitting the data using better numerical analyses [57]. Our recommended value from the RNCCSD method agrees quite well with both the values. 
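The quadrature combination of the uncertainty sources listed in Table VI, mentioned above, amounts to a root-sum-of-squares; a trivial sketch with placeholder numbers is given below.

```python
# Sketch: net uncertainty obtained by adding individual sources in quadrature,
# as described for Table VI. The numbers below are placeholders, not the
# actual entries of Table VI.
import numpy as np

def net_uncertainty(contributions):
    """sigma_net = sqrt(sum_i sigma_i**2)."""
    return float(np.sqrt(np.sum(np.square(contributions))))

sigma_alpha_d = net_uncertainty([0.25, 0.10, 0.05, 0.15])  # e.g. basis, Breit, QED, triples (hypothetical)
```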
The latest calculation of α d employs a sum-over-states approach, combining only a few E1 matrix elements from the multiconfiguration Dirac-Fock (MCDF) method with experimental energies, while the rest of the contributions are estimated using lower-order methods; it shows poor agreement with the experimental result [58]. The calculations reported in Refs. [15,17] are equivalent to our RCCSD method, while the calculations reported in Refs. [9,13,14] are based on the FF approach using the non-relativistic CC method. There seems to be an overall good agreement among all these calculations. It is worth mentioning that the recommended values given by the non-relativistic CC calculations of α d in Refs. [9,13,14] are scaled values quoted in order to improve accuracy, but the actual calculations give very different values.

TABLE VI. Estimated uncertainties to the α d , α q and B values (in a.u.) from basis extrapolation (given as "Basis"), the neglected Breit interaction (quoted as "Breit") and lower-order QED corrections (given as "QED") are listed.

Nonetheless, the agreement of the previously reported accurate values of α d for Zn with our RNCCSD result suggests that our estimated α q and B values obtained using this method are reliable. There is one more calculation of α q and B reported using the non-relativistic CC method in the FF approach [9]. As mentioned above, the final recommended values from these references are scaled values and the actual calculated values are again very different. For example, the CCSD results for α q and B are quoted in the above reference as 360.16 a.u. and −2940 a.u., respectively. In the CCSD(T) method, the α q value was modified to 351.24 a.u. while the B value was modified to −2780 a.u. [9]. After scaling the results, the recommended values for α q and B were given as 324.8(16.2) a.u. and −2370(240) a.u., respectively. Our RNCCSD results are obtained in the ab initio framework and they account for relativistic effects. It is, however, interesting to see that our results validate the recommended values reported in Ref. [9]. It can further be noted that our RCCSD value for B given in Table I differs substantially from the RNCCSD result. Therefore, the good agreement of our RNCCSD results with the previously recommended values of α d , α q and B suggests that the approximated RNCC method is more reliable than the approximated RCC method for the determination of the above quantities.

V. CONCLUSION

We have employed normal coupled-cluster theory in the relativistic framework to determine the dipole, quadrupole and dipole-quadrupole interaction polarizabilities of the zinc atom. By considering the singles and doubles excitation approximation and estimating uncertainties from the neglected contributions, very accurate values for the above quantities are reported. We have also given values from other methods, including the ordinary relativistic coupled-cluster theory, from our calculations. It was found that the dipole and quadrupole polarizabilities from the normal and the ordinary coupled-cluster theory agreed quite well, but there was a large difference in the dipole-quadrupole interaction polarizability. We also compared our results with the earlier recommended values from various calculations and observed that our results from the relativistic normal coupled-cluster theory match those values better than the results obtained using the ordinary coupled-cluster theory at the singles and doubles approximation.
ACKNOWLEDGEMENT The computations reported in the present work were carried out using the Vikram-100 HPC cluster of the Physical Research Laboratory (PRL), Ahmedabad, Gujarat, India.
Characterization of the biofilm phenotype of a Listeria monocytogenes mutant deficient in agr peptide sensing Abstract Listeria monocytogenes is a food‐borne human pathogen and a serious concern in food production and preservation. Previous studies have shown that biofilm formation of L. monocytogenes and presence of extracellular DNA (eDNA) in the biofilm matrix varies with environmental conditions and may involve agr peptide sensing. Experiments in normal and diluted (hypoosmotic) complex media at different temperatures revealed reduced biofilm formation of L. monocytogenes EGD‐e ΔagrD, a mutant deficient in agr peptide sensing, specifically in diluted Brain Heart Infusion at 25°C. This defect was not related to reduced sensitivity to DNase treatment suggesting sufficient levels of eDNA. Re‐analysis of a previously published transcriptional profiling indicated that a total of 132 stress‐related genes, that is 78.6% of the SigB‐dependent stress regulon, are differentially expressed in the ΔagrD mutant. Additionally, a number of genes involved in flagellar motility and a large number of other surface proteins including internalins, peptidoglycan binding and cell wall modifying proteins showed agr‐dependent gene expression. However, survival of the ΔagrD mutant in hypoosmotic conditions or following exposure to high hydrostatic pressure was comparable to the wild type. Also, flagellar motility and surface hydrophobicity were not affected. However, the ΔagrD mutant displayed a significantly reduced viability upon challenge with lysozyme. These results suggest that the biofilm phenotype of the ΔagrD mutant is not a consequence of reduced resistance to hypoosmotic or high pressure stress, motility or surface hydrophobicity. Instead, agr peptide sensing seems to be required for proper regulation of biosynthesis, structure and function of the cell envelope, adhesion to the substratum, and/or interaction of bacteria within a biofilm. | INTRODUC TI ON Listeria monocytogenes is a saprophytic soil organism that is widespread in nature (Vivant, Garmyn, & Piveteau, 2013) and frequently found in food processing environments posing a threat to the food chain Muhterem-Uyar et al., 2015;NicAogáin & O'Byrne, 2016). In healthy individuals, food-borne infections with L. monocytogenes result in mild gastroenteritis or remain completely asymptomatic. However, in at-risk groups such as immunocompromised persons, elderly people and pregnant women, L. monocytogenes may cause life-threatening disease (Allerberger & Wagner, 2010;Vázquez-Boland et al., 2001). Two characteristics that make L. monocytogenes a major concern in food processing and sanitation of the respective production lines are the ability to form surface-attached communities (also referred to as biofilm formation) and an extremely high tolerance to a wide range of environmental conditions and stresses (Ferreira, Wiedmann, Teixeira, & Stasiewicz, 2014;NicAogáin & O'Byrne, 2016). Following initial adhesion, L. monocytogenes is able to form surface-attached communities (Carpentier & Cerf, 2011;Renier, Hébraud, & Desvaux, 2011;da Silva & De Martinis, 2013). The population density in these communities is 1-2 orders of magnitude lower than that observed for surface-attached communities of other bacteria (da Silva & De Martinis, 2013). Compared to other bacteria, biofilm formation of L. 
monocytogenes is not as pronounced, but may be enhanced by precolonization of surfaces by other bacteria such as Pseudomonas putida and Flavobacterium sp., probably involving the extracellular polymeric substances (EPS) produced by these bacteria (Giaouris et al., 2015). By contrast, precolonization of surfaces with, for example, Pseudomonas fragi and Serratia ssp. reduced biofilm formation of L. monocytogenes. There are conflicting results regarding the production of EPS by L. monocytogenes. Some studies conclude that L. monocytogenes biofilms generally lack EPS (Renier et al., 2011). By contrast, a recent study could show that EPS production by L. monocytogenes can be induced by elevated levels of the second messenger cyclic di-GMP and the genetic locus for EPS production was identified (Chen et al., 2014). This leaves room for interpretation as to whether or not these communities are biofilms according to the strict definition, which requires the communities to be embedded into a self-produced matrix of extracellular polymeric substances (Flemming & Wingender, 2010). Nevertheless, several studies have provided evidence for three-dimensional structures described as honey-comb or knitted chains and the presence of extracellular DNA (eDNA) and exopolysaccharides (Borucki, Peppin, White, Loge, & Call, 2003;Guilbaud, Piveteau, Desvaux, Brisse, & Briandet, 2015;Harmsen, Lappann, Knøchel, & Molin, 2010;Rieu et al., 2008;Zetzmann et al., 2015). Thus, it seems reasonable to consider surface attached communities of L. monocytogenes as biofilms. The aim of this study was to investigate the biofilm phenotype of a L. monocytogenes mutant deficient in agr peptide sensing. | Bacterial strains and growth conditions In this study, L. monocytogenes strains EGD-e, its isogenic mutant EGD-e ΔagrD, and the genetically complemented strain EGD-e ΔagrD::pIMK2agrD were used. All strains have been described previously (Riedel et al., 2009). Bacteria were cultivated routinely in brain heart infusion broth (BHI, Oxoid, Altrincham, Cheshire, England) or 10-fold diluted BHI (0.1BHI) at 25 or 37°C. Precultures for functional assays were prepared by inoculation of a single colony from a fresh agar plate into 10 ml BHI and incubated aerobically on a rotary shaker (200 rpm) at 25°C overnight (o/N, i.e., approx. 16 hr). | Quantification of surface-attached biomass To quantify surface-attached biomass, classical crystal violet assays were performed in 96-well microtiter plates as described previously (Zetzmann et al., 2015). Where indicated, 1 unit (U) of DNase I (Thermo Scientific, Waltham, MA) or 1 mg/ml pronase (Sigma-Aldrich, Darmstadt, Germany) was added to the wells directly after inoculation. Plates were incubated at 25°C or 37°C for 24 hr. For analysis, biofilms were washed gently twice with phosphate-buffered saline (PBS) followed by staining with 0.1% (v/v) crystal violet solution (Merck, Darmstadt, Germany) for 30 min. After three further washings with PBS, crystal violet was released from biofilms by addition of 100 µl 96% (v/v) ethanol and incubated for 10 min. Biofilm biomass was quantified by measuring absorbance at 562 nm (Abs 562 nm ) with background correction, that is, crystal violet staining in wells incubated with sterile media under the same conditions. 
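As an illustration of the quantification step described above, the following Python sketch computes background-corrected biofilm biomass from raw Abs 562 nm readings. The strain labels, well layout, and absorbance values are hypothetical placeholders rather than data from this study; the actual quantification was performed as described in Zetzmann et al. (2015).

```python
import numpy as np

# Hypothetical Abs 562 nm readings from one crystal violet microtiter plate.
# "blank" wells contained sterile medium incubated and stained in parallel
# and serve as the background correction.
raw_abs562 = {
    "WT":          [0.82, 0.79, 0.85, 0.80],
    "dAgrD":       [0.41, 0.38, 0.44, 0.40],
    "dAgrD_compl": [0.78, 0.81, 0.76, 0.79],
    "blank":       [0.09, 0.10, 0.08, 0.09],
}

background = np.mean(raw_abs562["blank"])

for strain in ("WT", "dAgrD", "dAgrD_compl"):
    corrected = np.array(raw_abs562[strain]) - background  # background correction
    print(f"{strain}: Abs562 = {corrected.mean():.2f} +/- {corrected.std(ddof=1):.2f}")
```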
| Membrane and cell wall stress assays

To assess the effects of reduced osmolarity in 0.1BHI on viability of bacteria, aliquots of the preculture used for biofilm assays were diluted 1:100 in either 0.1BHI or demineralized H 2 O (dH 2 O) and viable cell counts were determined as colony-forming units per ml (CFU/ml) by spot-plating. For this purpose, 10 µl aliquots of 10-fold serial dilutions were plated in triplicate onto BHI agar and the colonies of an appropriate dilution were counted to calculate CFU/ml. The effect of lysozyme treatment was analyzed in a similar assay, except that bacteria were inoculated from a preculture into 0.1BHI, grown at 25°C to exponential growth phase (OD 600nm = 0.15-0.2), harvested by centrifugation and resuspended in 0.1BHI containing 5 µg/ml lysozyme. Bacteria were incubated in the presence of lysozyme at 25°C for the indicated time, and log-reduction was calculated relative to the CFU/ml at t = 0 min of an untreated control, that is, an aliquot resuspended in 0.1BHI without lysozyme.

| High hydrostatic pressure treatments

For high hydrostatic pressure (HPP) experiments, a single colony from a fresh BHI agar plate was inoculated into BHI broth and grown for 12 hr at 37°C. This preculture was diluted to an OD 600nm of 0.05 in 0.1BHI and grown for 1.5-2 hr to exponential growth phase (i.e., OD 600nm of 0.15 ± 0.02). At this stage, samples of 2 ml were loaded into Eppendorf tubes and sealed, carefully avoiding any air bubbles inside. Pressure treatments were conducted in multivessel high-pressure equipment (four vessels of 100 ml; Resato, Roden, the Netherlands) at 20 ± 0.5°C. A mixture of water and propylene glycol (TR15, Resato) was used as pressure-transmitting fluid. Pressure treatments were performed at 200, 300, and 400 MPa with a compression rate of 250 MPa/min, and 60 s after the come-up time were allowed as the equilibration time necessary for each treatment. Samples were maintained for an additional 60 s at the established pressure, followed by decompression of the vessels in less than 5 s. Treated samples were removed from the high-pressure vessels and, immediately afterwards, viable cell counts (CFU/ml) were quantified by spot-plating as described above.

| Motility assays

To assess motility of bacteria, precultures were prepared as described above in 0.1BHI at 25°C o/N. Soft agar plates of the same medium (0.1BHI, 0.2% agar) were inoculated from these precultures by dipping an inoculation needle into the preculture and briefly stabbing the surface of the soft agar plate. After incubation for 24 hr at 25°C, plates were imaged using a standard digital camera and the size of the zone of growth around the spot of inoculation was measured.

| Microbial adhesion to hydrocarbons

Surface hydrophobicity of all strains was evaluated using a standard assay to quantify microbial adhesion to hydrocarbons (MATH assay) (Rosenberg, 2006). Briefly, bacteria were grown in 0.1BHI at 25°C o/N, washed once in PBS and adjusted to an OD 600nm of 0.1 in PBS (OD1). Two milliliters of this suspension were mixed with 0.4 ml xylene and vortexed for 2 min. After separation of the phases, OD 600nm was again measured in the aqueous phase (OD2). Hydrophobicity (H) was then calculated as H (%) = [(OD1 − OD2)/OD1] × 100.

| Statistical analysis

Statistical analysis was performed by Student's t test or analysis of variance (ANOVA) with Dunnett's posttest to adjust p-values for multiple comparisons using GraphPad Prism (version 6). Differences were considered significant at p < 0.05.
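The read-outs described in this and the preceding sections reduce to a few arithmetic steps: CFU/ml from counted spots of a serial dilution, log-reduction relative to an untreated control, and the MATH hydrophobicity index. A minimal Python sketch with made-up numbers is shown below; colony counts, dilution levels, and OD values are purely illustrative and not results of this study.

```python
import numpy as np

def cfu_per_ml(colony_counts, dilution_exponent, plated_volume_ml=0.01):
    """CFU/ml from triplicate 10-ul spots of one countable 10-fold dilution."""
    return np.mean(colony_counts) * (10 ** dilution_exponent) / plated_volume_ml

# Hypothetical counts: untreated control at t = 0 min vs. lysozyme-treated at 120 min.
cfu_control = cfu_per_ml([52, 48, 50], dilution_exponent=5)
cfu_treated = cfu_per_ml([38, 41, 36], dilution_exponent=4)
log_reduction = np.log10(cfu_control) - np.log10(cfu_treated)
print(f"log10 reduction after 120 min: {log_reduction:.2f}")

def hydrophobicity(od1, od2):
    """MATH assay: H (%) = (OD1 - OD2) / OD1 x 100."""
    return (od1 - od2) / od1 * 100.0

print(f"H = {hydrophobicity(od1=0.10, od2=0.06):.1f} %")
```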
| RESULTS AND DISCUSSION

Recently, we were able to show that biomass and presence of eDNA in biofilms of L. monocytogenes EGD-e vary with growth conditions (Zetzmann et al., 2015). Additionally, a L. monocytogenes EGD-e ΔagrD deletion mutant showed reduced levels of surface-attached biomass in 0.1BHI at room temperature (Riedel et al., 2009).

FIGURE 1 Biofilm formation (a) and DNase I sensitivity of biofilms (b) of L. monocytogenes EGD-e WT (W), EGD-e ΔagrD (Δ), and EGD-e ΔagrD::pIMK2agrD (C). Biofilms were grown in BHI or 0.1BHI at 25 or 37°C in the absence (a) or presence of DNase I (b; 0.1BHI at 25°C only). Biofilm biomass was quantified by crystal violet staining and measuring absorbance at 562 nm (Abs 562nm) after 24 hr of growth in polystyrene microtiter plates. All values are mean ± standard deviation of three independent experiments. Statistical analysis was performed by ANOVA with Dunnett's multiple comparisons test with L. monocytogenes EGD-e WT set as control condition (a) or Student's t test comparing biofilm of each strain in the presence and absence of DNase I (b; *p < 0.05; **p < 0.01; ***p < 0.001).

In the present study, the highest biofilm biomass was observed in full-strength BHI, followed by 0.1BHI at 25°C, and the lowest biofilm biomass was formed in 0.1BHI at 37°C (Figure 1a). Interestingly, the ΔagrD mutant showed reduced biofilm formation only in 0.1BHI at 25°C. For all other conditions, no difference was observed between the three strains. Thus, agr peptide sensing is required for proper regulation of biofilm formation under specific conditions, that is, in 0.1BHI at 25°C. Interestingly, these are the conditions under which biofilms of the WT strain showed increased abundance of eDNA and DNase I sensitivity (Zetzmann et al., 2015). This prompted us to test whether loss of agr peptide signaling is associated with altered sensitivity toward DNase I treatment (Figure 1b). However, biofilm formation of the ΔagrD mutant was reduced by DNase I to a similar extent as observed for the WT (and complemented strain) at 25°C in 0.1BHI, indicating that eDNA is present in these communities and that lack of eDNA is not responsible for the observed phenotype of L. monocytogenes EGD-e ΔagrD. The conditions that produce the phenotype of the ΔagrD mutant may cause osmotic stress due to the low nutrient and ion concentration in dH 2 O-diluted BHI (0.1BHI). In a previous study, a deletion mutant in the AgrC sensor histidine kinase of the agr system displayed increased sensitivity to high concentrations of salt (Pöntinen, Lindström, Skurnik, & Korkeala, 2017). Thus, we hypothesized that a reduced resistance to osmotic stress may lead to increased lysis of bacteria and, consequently, reduced surface-attached biomass. In order to get a first indication whether deletion of agrD results in reduced stress resistance, we re-analyzed a previously published transcriptomic data set comparing L. monocytogenes EGD-e ΔagrD with its parental WT strain (Riedel et al., 2009). The conditions of biofilm formation (0.1BHI, 25°C) and the transcriptomic analysis (BHI, 37°C) are different. Nevertheless, we reasoned that the transcriptional data would provide first indications as to whether or not stress-related genes are affected by the lack of agr peptide signaling, and that any stress-related phenotype would be even more evident under, for example, hypoosmotic stress (i.e., 0.1BHI). We therefore compared the differentially expressed genes to the regulon of the alternative sigma factor σ B , that is, the major regulator of the general stress response in many gram-positive bacteria including L.
monocytogenes (Chaturongakul, Raengpradub, Wiedmann, & Boor, 2008;Kazmierczak, Mithoe, Boor, & Wiedmann, 2003;van Schaik & Abee, 2005). In L. monocytogenes, the σ B regulon comprises 168 genes that are positively regulated by σ B . Comparison with the 715 genes differentially expressed in L. monocytogenes EGD-e ΔagrD revealed an overlap of 132 genes, which is 78.6% of the σ B regulon and 18.5% of the agr regulated genes (Table 1 and The four genes of the dltABCD operon are required for D-alanine esterification of teichoic acids in the cell wall of L. monocytogenes, which is involved in adhesion and virulence (Abachin et al., 2002), and a ΔdltABCD mutant showed impaired biofilm formation (Alonso et al., 2014). The entire dlt operon was differentially expressed in L. monocytogenes EGD-e ΔagrD. However, since its expression was increased in the ΔagrD mutant compared to the WT and it was thus ruled out as being responsible for the biofilm phenotype of the mutant. Besides fliQ and motA, three other genes (flaA, fliD, and fliI) involved in flagellar motility and its regulation were shown to impact on biofilm formation by the transposon mutant screen (Alonso et al., 2014). Flagellar motility has previously been shown to play a role in adhesion and biofilm formation of L. monocytogenes (Di Bonaventura et al., 2008;Lemon, Higgins, & Kolter, 2007;Todhanakasem & Young, 2008). Interestingly, 16 of the 44 genes lmo_0675-lmo_0718 of L. monocytogenes EGD-e that encode for the flagellar apparatus were differentially regulated in the ΔagrD mutant (Supplementary File S1). Although these genes show divergent expression (i.e., some are up-and others down-regulated) in the mutant, we performed motility assays to test if this strain shows altered expression or functionality of flagella. However, no difference in swimming motility was observed between the ΔagrD mutant and the WT or complemented strain at 25°C on 0.1BHI plates containing 0.2% (w/v) agar (Figure 3). In the absence of other indications about the possible reason for the phenotype of L. monocytogenes EGD-e ΔagrD, we further analyzed the data set of genes differentially expressed in this strain. We reasoned that impaired attachment to the substratum of the mutant and interaction with other bacteria might be involved in the observed phenotype. These processes are mediated by proteins that are either secreted into the environment (exoproteins) or attached to the bacterial cell envelope. In fact, presence of pronase completely abolished biofilm formation of all three tested strains (Appendix Figure A1). F I G U R E 2 Resistance of L. monocytogenes EGD-e WT (W), EGD-e ΔagrD (Δ), and EGD-e ΔagrD::pIMK2agrD (C) exposed to hypoosmotic conditions (a) or high hydrostatic pressure (b). (a) Bacteria were transferred to 0.1BHI or demineralized H 2 O (dH 2 O) and viability was assessed after 60 min by determining CFU/ml. (b) Bacteria from exponential growth phase were resuspended in 0.1BHI and subjected to HPP at the indicated pressure. Changes in viability are reported as Δlog 10 (CFU/ml) compared to bacterial counts before treatment. Values are mean ± standard deviation of three independent experiments. Statistical analysis was performed by ANOVA with Dunnett's multiple comparisons test with L. monocytogenes EGD-e WT set as control condition F I G U R E 3 Motility of L. monocytogenes EGD-e WT (W), EGD-e ΔagrD (Δ), and EGD-e ΔagrD::pIMK2agrD (C). 
Representative images and quantification of the diameter of the zone of growth around the inoculation spot of the three strains grown on 0.1BHI soft agar (0.2%). Values are mean ± standard deviation of three experiments with independent precultures. For each preculture and strain at least three growth zones were measured. Statistical analysis was performed by ANOVA with Dunnett's multiple comparisons test with L. monocytogenes EGD-e WT set as control condition Thus, we retrieved the cellular localization of all agr-regulated proteins as annotated on the Listeriomics web page (https://listeriomics. pasteur.fr/Listeriomics/#bacnet.Listeria), which is based on an extensive in silico analysis (Renier, Micheau, Talon, Hébraud, & Desvaux, 2012). A total of 995 genes (34.8%) in the genome and 293 genes (i.e., 41.0%) of the agr-regulated genes of L. monocytogenes EGD-e encode for (predicted) extracytoplasmatic proteins (Table 2). Amongst the 715 agr-dependent genes, 19 (2.7%) encode for exoproteins (i.e., proteins secreted and released into the extracellular environment), 25 (3.5%) for lipoproteins, 27 (3.8%) for cell wall proteins, 187 (26.2%) for integral membrane proteins, and 35 (4.9%) for cytoproteins (i.e., proteins predicted to be secreted via non-classical pathways). None of the groups seems to be markedly overrepresented in the agr-regulated genes. Nevertheless, the percentages of the agr-regulated genes within these groups (except for exoproteins) were comparable or higher compared to the percentage of the respective group on the genome level suggesting that the agr system is involved in the regulation of biosynthesis, structure, and function of the cell envelope. Of note, the agr-regulated genes included 10 genes for internalins or internalin-like proteins, 15 genes for peptidoglycan-associated proteins, and a number of genes for penicillin binding proteins and proteins with (know or presumable) cell wall-hydrolyzing activity (Supplementary File 1). This indicates that the ΔagrD system is involved in regulation of cell envelope proteins that may be relevant for attachment to and interaction with abiotic surfaces as well as amongst bacterial cells. Altered surface protein profiles may result in changes in the physicochemical properties of the bacterial surface such as charge and hydrophobicity, which were shown to play a role in adhesion and biofilm formation of L. monocytogenes (Di Bonaventura et al., 2008;Takahashi, Suda, Tanaka, & Kimura, 2010). MATH assays performed in xylene revealed that L. monocytogenes EGD-e ΔagrD did not differ in surface hydrophobicity compared to the WT or complemented strain when bacteria were grown in 0.1BHI at 25°C (Figure 4a). Similar results were obtained, when octadecene was used as solvent (data not shown). Another functional consequence of an altered cell wall composition could be changes in the resistance to cell wall damage. To test this possibility, the resistance of L. monocytogenes EGD-e ΔagrD to treatment with 5 µg/ml lysozyme was tested in 0.1BHI (Figure 4b). Under these conditions, viability of the WT and complemented strain decreased by about 0.5 logs during the first 120 min of lysozyme challenge. More importantly, the sensitivity of the ΔagrD mutant was significantly increased at any time point measured and viable counts were reduced by about 2 logs after 120 min. Collectively, the obtained results suggest that the biofilm phenotype of L. 
monocytogenes EGD-e ΔagrD is not a general feature of this mutant but is only relevant under specific conditions. The experimental conditions under which the mutant displays reduced biofilm formation include nutrient limitation and reduced osmolarity. These are the conditions similar to those encountered in difficult to access reservoirs in food processing plants (Carpentier & Cerf, 2011;Ferreira et al., 2014;da Silva & De Martinis, 2013). Thus, the agr system may be important for adaptation and survival of L. monocytogenes at such sites. The observed phenotype of the ΔagrD mutant is not associated with differences in eDNA abundance, increased lysis in hypoosmotic conditions, flagellar motility, or surface hydrophobicity. It is more likely, that reduced biofilm formation of L. monocytogenes EGD-e ΔagrD is the result of an altered cell envelope proteome, TA B L E 2 Number and percentage of different groups of genes encoding extracytoplasmatic proteins amongst the agr-regulated genes of L. monocytogenes F I G U R E 4 (a) Surface hydrophobicity and (b) resistance of L. monocytogenes EGD-e WT (W), EGD-e ΔagrD (Δ), and EGD-e ΔagrD::pIMK2agrD (C) exposed to lysozyme (b). (a) Surface hydrophobicity (H [%]) was evaluated using MATH assay. (b) Bacteria from exponential growth phase were resuspended in 0.1BHI containing 5 µg/ml lysozyme and incubated for the indicated time. Changes in viability are reported as Δlog 10 (CFU/ ml) compared to bacterial counts before treatment. Values are mean ± standard deviation of three independent experiments. Statistical analysis was performed by ANOVA with Dunnett's multiple comparisons test with L. monocytogenes EGD-e WT set as control condition (***p < 0.001) which manifests in reduced adhesion to the abiotic surface and/or to neighboring bacteria or the biofilm matrix and, in consequence, increased dispersal. The previously published transcriptional data (Riedel et al., 2009) provided first indications for genes and their products possibly involved in these phenotypes. In further studies, the contribution of these factors to the observed phenotype and their expression levels need to be investigated for example, by qPCR and experiments using knock-out mutants of the respective genes. ACK N OWLED G EM ENTS This study was partially funded within the ERA-IB2 consortium CO N FLI C T O F I NTE R E S T S The authors declare no conflict of interests. AUTH O R S CO NTR I B UTI O N CU conceived the study. MZ, FIB, PC, DB, and LGG carried out experiments. PC, DB, AIN, GMS, and CUR analyzed data. DB, AIN, and CUR drafted the manuscript and all authors contributed to preparing the final version of the manuscript. All authors read and approved the final manuscript. E TH I C S S TATEM ENT Not required. DATA ACCE SS I B I LIT Y All relevant data are presented in figures, tables, or in Supplementary File 1. Raw data used for preparation of figures will be made avail- APPENDIX F I G U R E A 1 Sensitivity of biofilms of L. monocytogenes EGD-e WT (W), EGD-e ΔagrD (Δ), and EGD-e ΔagrD::pIMK2agrD (C) to pronase. Biofilms were grown in BHI or 0.1BHI at 25 or 37°C in the absence (black bars) or presence (white bars) of 1 mg/ml pronase. Biofilm biomass was quantified by crystal violet staining and measuring absorbance at 562 nm (Abs 562nm ) after 24 hr of growth in polystyrene microtiter plates. All values are mean ± standard deviation of three independent experiments
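The strain comparisons in the figures above rely on ANOVA with Dunnett's multiple comparisons test against the WT, or on pairwise Student's t tests, computed in GraphPad Prism. As a rough, non-authoritative illustration of the same comparisons outside Prism, the sketch below uses scipy.stats.dunnett (available in SciPy 1.11 or later) on hypothetical background-corrected Abs 562 nm values; none of these numbers are from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical background-corrected Abs 562 nm values (three independent experiments).
wt     = np.array([0.74, 0.70, 0.77])
d_agrD = np.array([0.31, 0.28, 0.35])
compl  = np.array([0.69, 0.72, 0.66])

# Dunnett's test: each strain compared against the WT control.
res = stats.dunnett(d_agrD, compl, control=wt)
print("Dunnett p-values (dAgrD, complemented):", np.round(res.pvalue, 4))

# Pairwise Student's t test, as used for the +/- DNase I comparisons.
t_stat, p_val = stats.ttest_ind(wt, d_agrD)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```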
Neural oscillations track recovery of consciousness in acute traumatic brain injury patients Abstract Electroencephalography (EEG), easily deployed at the bedside, is an attractive modality for deriving quantitative biomarkers of prognosis and differential diagnosis in severe brain injury and disorders of consciousness (DOC). Prior work by Schiff has identified four dynamic regimes of progressive recovery of consciousness defined by the presence or absence of thalamically‐driven EEG oscillations. These four predefined categories (ABCD model) relate, on a theoretical level, to thalamocortical integrity and, on an empirical level, to behavioral outcome in patients with cardiac arrest coma etiologies. However, whether this theory‐based stratification of patients might be useful as a diagnostic biomarker in DOC and measurably linked to thalamocortical dysfunction remains unknown. In this work, we relate the reemergence of thalamically‐driven EEG oscillations to behavioral recovery from traumatic brain injury (TBI) in a cohort of N = 38 acute patients with moderate‐to‐severe TBI and an average of 1 week of EEG recorded per patient. We analyzed an average of 3.4 hr of EEG per patient, sampled to coincide with 30‐min periods of maximal behavioral arousal. Our work tests and supports the ABCD model, showing that it outperforms a data‐driven clustering approach and may perform equally well compared to a more parsimonious categorization. Additionally, in a subset of patients (N = 11), we correlated EEG findings with functional magnetic resonance imaging (fMRI) connectivity between nodes in the mesocircuit—which has been theoretically implicated by Schiff in DOC—and report a trend‐level relationship that warrants further investigation in larger studies. . Biomarkers of recovery are greatly needed in DOC to inform prognosis and differential diagnosis, as diagnostic error rates in DOC are as high as 40%, largely owing to the challenges posed by behavioral assessments of level of consciousness (Andrews, Murphy, Munday, & Littlewood, 1996;Childs, Mercer, & Childs, 1993;Schnakers et al., 2009). Diagnostic biomarkers are needed in DOC to identify instances of covert consciousness, that is, consciousness that occurs in the absence of behavioral responsiveness (Huang et al., 2018;Monti et al., 2010). Furthermore, patients' prognoses are also frequently inaccurate, with many families often advised to consider withdraw of care despite over two thirds of patients recovering consciousness when DOC results from traumatic brain injury (TBI) (Giacino et al., 2014;Peberdy et al., 2003;Turgeon et al., 2011). However, predicting which patients are likely to recover consciousness in the absence of prognostic biomarkers remains challenging, thus underscoring the need for such biomarkers (Provencio et al., 2020). EEG is an attractive modality for biomarkers of postinjury recovery. As a direct readout of cortical activity, EEG is inexpensive, portable, and easily deployed at the bedside. In particular, EEG may be well suited to test hypotheses concerning the role of functional reafferentation of the cortex during coma recovery, that is, restoration of thalamocortical integrity. One such hypothesis, the mesocircuit model, theorizes that the globus pallidus interna (GPi) is disinhibited following diffuse brain injury, and thus, silences the central thalamus, resulting in functional deafferentation of the cortex (Schiff, 2010(Schiff, , 2016. 
Thus, recovery from DOC requires restoration of striatal functioning and thalamocortical integrity. The latter may be inferred from EEG, given that cortical oscillations such as theta and alpha are thought to be driven by the thalamus (Hughes & Crunelli, 2005;Lindgren et al., 1999;Liu et al., 2012;Sarnthein & Jeanmonod, 2007; Sarnthein, Morel, Von Stein, & Jeanmonod, 2003;Schreckenberger et al., 2004). As such, the loss and recovery of thalamocortical integrity is visible in noninvasive recordings and motivates the "ABCD" model by Schiff. Based on the assumption that specific cortical oscillations indicate varying levels of thalamocortical integrity, Schiff (2016) has defined four dynamic regimes that build on the mesocircuit model, each detectable with EEG and corresponding to a thalamocortical state that indicates progressive circuit recovery. In particular, this model emphasizes thalamic projections to frontal cortical areas, given the privileged role of central thalamic nuclei in anterior forebrain arousal (Schiff, 2020). These EEG types, labeled A-D (hence, ABCD model) are summarized in Table 1. Later types (C, D) denote more progressive recovery (i.e., are "better") than earlier types, (A, B), which correspond to a quiescent thalamic state. Specifically, A-type EEG spectra (featuring no or only low frequency oscillations) are thought to indicate complete cortical deafferentation on a circuit level and a vegetative state on a behavioral level, whereas B-type spectra (featuring theta oscillations) indicate severe deafferentation and a minimally conscious state. Next, C-type spectra (featuring theta and beta oscillations) may occur when thalamic nuclei fire in burst mode, corresponding to less severe deafferentation and emergence from the minimally conscious state. Finally, D-type spectra (featuring alpha and beta oscillations) indicate an approximately normal EEG, corresponding to tonic firing of thalamic nuclei and a normal capacity for wakeful consciousness. During the progression from A to D, excitatory synaptic background activity, as well as metabolic rates in cortical, central thalamic, and pallidal tissues, are increasingly restored to normal levels (Comanducci et al., 2020). Two recent studies have tested the mesocircuit hypothesis in the context of the ABCD model using EEG. Forgacs et al. (2017) found that EEGs from 44 patients who had lost consciousness following cardiac arrest displayed a progression of EEG patterns consistent with the ABCD model. In particular, EEG patterns indicative of greater circuit-level recovery correlated with better outcomes at hospital discharge. More recently, Alkhachroum et al. (2020) used the ABCD model to examine recovery in DOC patients with largely anoxic etiologies treated with amantadine. The best recorded ABCD type increased (A-D) linearly with the percentage of patients who recovered the ability to follow commands. The foregoing studies offer first evidence for the ABCD model. However, it is unknown how EEG dynamic regimes relate empirically to recovery of the mesocircuit in DOC and whether these EEG types can be used as biomarkers. Deploying the ABCD model as a clinical biomarker will depend crucially on whether the different types can be detected at an acute stage and related to recovery of the mesocircuit. With the exception of three patients (Alkhachroum et al., 2020), the T A B L E 1 EEG types, criteria, and descriptive statistics Note: Theoretical meanings are based on the original ABCD model by (Schiff, 2016). 
Some EEG observations (10.6%) could not be classified according to the ABCD model due to peak combinations that did not fit any ABCD type (e.g., a peak in beta without an accompanying peak in theta or alpha). ABCD model has never been applied to patients with severe TBI in the acute stage. | EEG classification To classify EEG observations according to ABCD type, we splineinterpolated channel-level power spectral densities (PSDs) to achieve 100 frequency bins per octave. Next, we identified local maxima in PSDs [MATLAB function: findpeaks, min width = 0.1 log 2 (Hz), min prominence = 0.001 log 10 (μV 2 /Hz)] and determined the ABCD type using criteria in Table 1 unclassifiable. We also classified EEG observations according to whether they contained a theta and/or alpha peak (henceforth: θα type) using the mode across channels, given that the long oscillatory periods (>83 ms) of these EEG rhythms are compatible with physiological conduction delays between thalamus and cortex (Swadlow & Waxman, 2012; i.e., thalamocortical communication occurs fully within the excitable phase of the oscillation or one half of the oscillatory period [Fries, 2005]) and are thus especially valuable for inferring thalamocortical integrity. | fMRI data processing Patients in our cohort are prone to high motion so we have implemented several preprocessing measures and exclusion criteria to F I G U R E 1 EEG types and behavioral trajectories. (a) Thirteen EEG channels common to all patients were imported for analysis. Actual channel positions varied from patient to patient to accommodate bone flaps and injuries. (b) EEG observations (green circles) were sampled at timepoints corresponding to local maxima of the Glasgow Coma Scale (GCS, black trace), with a 12-hr buffer in between observations. Purple highlights show times with available EEG data. Time is referenced to the patient's earliest GCS score. (c) ABCD type (color) and θα type ( | Brain parcellation using independent component analysis We performed ICA using the GIFT toolbox (http://mialab.mrn.org/ software/gift/index.html) to parcellate the brain as implemented in Allen et al. (2014). We chose this parcellation approach (Crone et al., 2015;Crone, Lutkenhoff, Vespa, & Monti, 2020) as opposed to standard atlases, because the brains we investigated were severely injured. Thus, it is problematic to assume that parcellation resulting from the average of young and healthy brains adequately represents function in a brain that has been subject to reorganization due to TBI. We defined the cortical regions of interest (ROIs) at an individual level based on the individual functional covariance using groupICA. See Supporting Information Materials and Methods for further details of the parcellation and seed-based connectivity analysis. | Principal components space clustering Having classified EEG observations according to the presence or absence of power spectral peaks, we next asked how the foregoing approach would compare with a data-driven approach. To implement such an approach, we began by averaging power (unnormalized) across channels for each patient and log-scaling. Due to the large number of frequency bins (44), we next applied PCA to this feature space and retained the top two PCs according to variance explained. We then used k-means clustering to identify two clusters in PC space. Only one EEG observation was used per patient for clustering; EEG observations were selected according to the procedure described below under statistical analysis. 
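A simplified sketch of the channel-level classification described above is given below in Python, with scipy.signal.find_peaks standing in for MATLAB's findpeaks. The band boundaries, the prominence threshold, and the precedence rules for assigning A-D types are assumptions paraphrased from the summary of the ABCD model in this paper; the exact criteria are those of Table 1 and the authors' MATLAB implementation.

```python
import numpy as np
from scipy.signal import find_peaks

# Assumed band edges in Hz; anything below 4 Hz is treated as delta.
THETA, ALPHA, BETA = (4, 8), (8, 13), (13, 30)

def classify_channel(freqs, log_psd):
    """Simplified ABCD and theta-alpha classification of one channel's PSD."""
    # Local maxima; the prominence value is illustrative, not the study's threshold.
    peak_idx, _ = find_peaks(log_psd, prominence=0.001)
    peak_freqs = freqs[peak_idx]

    def has_peak(band):
        lo, hi = band
        return bool(np.any((peak_freqs >= lo) & (peak_freqs < hi)))

    theta, alpha, beta = has_peak(THETA), has_peak(ALPHA), has_peak(BETA)
    theta_alpha = bool(np.any((peak_freqs >= 4) & (peak_freqs <= 12)))  # 4-12 Hz peak

    if alpha and beta:
        abcd = "D"               # alpha and beta oscillations
    elif theta and beta:
        abcd = "C"               # theta and beta oscillations
    elif theta:
        abcd = "B"               # theta without beta
    elif not (alpha or beta):
        abcd = "A"               # no peaks above the delta band
    else:
        abcd = "unclassifiable"  # e.g., beta without an accompanying theta or alpha
    return abcd, theta_alpha

# Toy example: a PSD with bumps near 6 Hz (theta) and 20 Hz (beta) -> ("C", True).
freqs = np.linspace(1, 45, 441)
log_psd = (-0.02 * freqs
           + 0.5 * np.exp(-0.5 * ((freqs - 6) / 0.8) ** 2)
           + 0.3 * np.exp(-0.5 * ((freqs - 20) / 1.5) ** 2))
print(classify_channel(freqs, log_psd))
```

Per-observation types would then be taken as the mode across channels, as described above.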
| Statistical analysis To determine the extent to which EEG spectral peaks predicted patients' behavior, we related acute, longitudinal EEG data to both acute, longitudinal behavioral data (i.e., daily maximum GCS scores) and a single, chronic behavioral datum (i.e., chronic GOSe score). For the former, we used linear mixed models (LMMs) with random intercepts, that is, varying-intercept models. We allowed intercepts but not slopes to vary between patients as we expected patients to have different baselines but did not expect predictors to exert differing levels of influence across patients. Furthermore, given that all models had at least five predictor variables, modeling random slopes for each predictor would result in a cumbersome level of model complexity relative to our sample size. This LMM had the formula where GCS is the daily maximum GCS score, EEG is either an ordinal Note: Our heavily male sample reflects higher risk for TBI in males. "Time to follow up" gives the number of days postinjury when GOSe assessments were performed. Note that GOSe = 1 indicates that the patient was deceased (thus, for Patient 32, the time to follow up was not applicable). Comments give additional details including which analyses, if any, patients were excluded from and why. Abbreviation: N/A, not applicable. of medications. LMMs were fit in MATLAB using the function fitlme. Instances of unclassifiable ABCD type were treated as missing data. Patients with fewer than three usable EEG observations were excluded from this analysis due to an insufficient number of observations for inclusion in LMMs. Additionally, we utilized GCS subscales to infer consciousness in patients and related the same predictors to a binary variable denoting conscious state. Specifically, we inferred the presence of consciousness from a GCS motor score ≥5 or a GCS verbal score ≥4 (see Crone et al., 2020 for further details). We then tested the relationship between predictors and conscious state using generalized LMMs (GLMMs; MATLAB function fitglme) with the logit link function and the formula: where CONSCIOUS is a binary variable denoting the presence or absence of consciousness. For each of the above LMMs and GLMMs, we performed an F-test to evaluate the EEG term. An alternative model that predicts GCS scores and conscious state based on EEG spectral band power, rather than peak combinations, was also assessed. Separate models were fit for the absolute spectral power (unnormalized) and relative power (normalized by the total 1-45 Hz power). Specifically, we fit LMMs with the formula For the second aim (i.e., relating multiple longitudinal EEG observations per patient to a single chronic outcome), we were unable to fit LMMs or GLMMs because they are not compatible with an unbalanced design featuring dynamic/longitudinal predictors with a static outcome variable. Rather than arbitrarily choosing one EEG timepoint for each patient to create a balanced design, we utilized multiple linear regression models with a resampling approach that randomly sampled one EEG observation per patient with replacement for each of 9,999 resamples and constructed an empirical distribution of test statistics. Resamples that yielded invalid combinations (i.e., a rank deficient regression design matrix) were discarded and replaced with a new sampling. 
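Before the resampling procedure is detailed further below, the varying-intercept mixed models described above can be sketched roughly in Python with statsmodels (the study itself used MATLAB's fitlme and fitglme). The data, variable names, and covariate set here are placeholders; the actual models additionally adjusted for medication principal components and other covariates not reproduced in this excerpt.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical long-format data: one row per EEG observation.
n_patients, n_obs = 20, 6
patient = np.repeat(np.arange(n_patients), n_obs)
eeg_type = rng.integers(0, 2, size=patient.size)        # A (0) vs. B/C/D (1)
baseline = rng.normal(7, 2, size=n_patients)[patient]   # per-patient baseline GCS
gcs = np.clip(np.round(baseline + 2 * eeg_type + rng.normal(size=patient.size)), 3, 15)
data = pd.DataFrame({"patient": patient, "gcs": gcs, "eeg_type": eeg_type})

# Varying-intercept LMM: random intercept per patient, fixed slope for EEG type.
fit = smf.mixedlm("gcs ~ eeg_type", data, groups=data["patient"]).fit(reml=False)
print(fit.summary())
```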
We reported the results of the resample that yielded the median t-statistic for the variables EEG, DELTA, THETA, ALPHA, or BETA; for this reason, the number of resamples N was chosen as an odd number such that the middle-most test statistic would be defined. Our initial multiple linear regression model was specified as where GOSe is the chronic GOSe score. For alternative models using spectral power as predictors, we used multiple linear regression models with the formula Next, we compared GCS and GOSe scores between clusters identified using k-means clustering. This clustering approach requires only one EEG observation per patient. Because including all data from each patient in our clustering would have biased clusters toward patients with more observations, we once again utilized a resampling approach with replacement using 9,999 resamples to include only one EEG observation per patient per resample. Invalid resamples were discarded and replaced as described above. For each random resampling, we constructed a feature space using channel-averaged spectral power across all frequency bins, applied PCA, retained the two PCs that accounted for the highest proportion of variance, and identified two clusters in PC space using k-means clustering. For consistent labeling of clusters across resamples, we used one label for all clusters that had a centroid with a PC 1 -coordinate ≥0 and another label for those with a PC 1 -coordinate <0. We then performed multiple linear regression using the model where SCORE is the behavioral measure (GCS, conscious state, or GOSe) and CLUSTER is a binary variable denoting the patient's power space cluster membership. For predicting conscious state (a binary variable), we utilized logistic multiple regression. GCS scores were selected corresponding to the day/time of each patient's randomly resampled EEG observation. The resample yielding the median tstatistic for CLUSTER was then selected for reporting. Note that this resampling procedure was performed separately for GCS and GOSe. Finally, to relate EEG to fMRI connectivity, we correlated the mean proportion of channels with a θα peak with the z-scored fMRI connectivity for each of four fMRI ROI pairings: thalamus-striatum, striatum-globus pallidus, thalamus-prefrontal cortex (PFC), and thalamus-posterior cingulate cortex (PCC). We used θα type rather than ABCD type due to the greater variance and weaker skew of the former (see Section 3). Furthermore, to reduce the number of data points at floor (i.e., all θαÀ) or ceiling (i.e., all θα+), we choose to examine the proportion of EEG channels with a θα peak, rather than the modal (and thus binary yes/no) θα type of each EEG observation. To derive noise-robust estimates for each patient, we averaged this proportion across all EEG observations within 48 hr of fMRI acquisition. Correlations were derived using the Pearson coefficient. We did not include covariates in this analysis for two reasons: (a) EEG observations from both before and after fMRI acquisition were included, and so temporal precedence of predictors could not always be established, and (b) because EEG and fMRI are both intimately related measures of brain activity, their relationship is not confounded by the same variables that were covaried for in other analyses. 
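The resampling-based cluster comparison described above can be summarized in the following hedged Python sketch, with scikit-learn and scipy in place of the original MATLAB code: one EEG observation is drawn per patient, channel-averaged log-power spectra are reduced to two principal components, k-means with k = 2 labels clusters by the sign of the centroid's PC1 coordinate, behavior is regressed on cluster membership, and the resample yielding the median test statistic is reported. All inputs below are synthetic, and covariates are omitted for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical inputs: per patient, several channel-averaged log-power spectra
# (44 frequency bins) and the matching behavioral scores.
n_patients, n_bins = 30, 44
spectra = [rng.normal(size=(rng.integers(1, 6), n_bins)) for _ in range(n_patients)]
scores = [rng.integers(3, 16, size=len(s)) for s in spectra]

t_stats = []
for _ in range(999):                                   # the study used 9,999 resamples
    idx = [rng.integers(len(s)) for s in spectra]      # one observation per patient
    X = np.array([s[i] for s, i in zip(spectra, idx)])
    y = np.array([sc[i] for sc, i in zip(scores, idx)])

    pcs = PCA(n_components=2).fit_transform(X)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pcs)
    # Consistent labels across resamples: cluster whose centroid has PC1 >= 0.
    labels = (km.cluster_centers_[km.labels_, 0] >= 0).astype(int)

    # Simple regression of behavior on cluster membership (covariates omitted).
    slope, _, _, _, stderr = stats.linregress(labels, y)
    t_stats.append(slope / stderr)

print("median t-statistic across resamples:", np.median(t_stats))
```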
Outliers were identified and excluded from correlation analysis using a threshold of three scaled median absolute deviations from the median of either variable (peak proportion or fMRI connectivity) using the MATLAB function isoutlier with default parameters. To account for multiple testing, we applied false discover rates (FDR, Benjamini-Hochberg) to correct pvalues (Benjamini & Hochberg, 1995). In all models, we corrected for the four hypothesis tests outlined in the introduction using a Bonferroni correction, yielding a test-wise criterion of α = .0125 (for fMRI connectivity, this was performed in addition to the FDR correction). | RESULTS Following preprocessing and quality control, we retained 320 EEG observations across 38 patients (31 male) with ages ranging from 19 to 84 years (40 ± 17 years, mean ± SD). Patient demographics are summarized in Table 2 and Figure S1. The number of usable EEG observations per patient ranged from 1 to 16 (8.4 ± 4.3, mean ± SD), with 1.8-30 min of usable data per observation (24 ± 7.2, mean ± SD). Thus, despite our modest patient sample size, we analyzed an average of 3.4 hr of data per patient. EEG observations were most commonly categorized as A-type (71.6%), with those remaining categorized as B-type (15.0%), C-type (1.88%), D-type (0.94%), or unclassifiable (10.6%). Separately, 30.9% of observations exhibited a peak in the theta-alpha band in the majority of channels (θα+). Distributions of GCS scores for each type are described in Table 1 and Figure 2. See Figure 3 for examples of each ABCD type and Figure S3 for behavioral trajectories and EEG variables for all patients. | Thalamically-driven oscillations track behavioral recovery Given the very small proportion of EEG observations that were categorized as C-type or D-type, we opted to create a binary variable by grouping types B, C, and D together, thus avoiding outlier effects. To investigate ABCD type in relation to acute behavioral state, we used an LMM with GCS as the dependent variable. We excluded four patients from the model due to having fewer than three observations, plus an additional patient with missing medication data, yielding n = 33. We found that ABCD type (i.e., A vs. B, C, D) significantly predicted GCS (p < 0.0001, F (1,268) = 17.0), with more progressive states (B, C, D) associated with higher GCS scores (t = 4.13); see Table S1 for F-statistics and p-values of the covariates and intercept in this and other models). However, unclassifiable EEG observations, which were omitted from the above model, were significantly more likely to correspond with higher GCS scores than classifiable EEG observations (t = À2.59) as revealed using an LMM with ABCD classifiability (yes/no) as the EEG predictor (p = .010, F (1,296) = 6.71). ABCD type also significantly predicted the presence/absence of consciousness in patients, as inferred from GCS subscales (p = .0037, Next, we fit the same models substituting θα type for ABCD type. Similar to ABCD type, θα type was significantly predictive of both GCS (p = <.0001, F (1,296) = 16.2) and conscious state (p = <.0001, F (1,296) = 15.6), where θα + patients were more likely to have higher GCS scores (t = 4.03) and to be conscious (t = 3.95). 
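The outlier rule and the multiple-comparison handling described at the end of the Methods above can be reproduced in a few lines of Python. The data below are placeholders; the scaled-MAD constant (1.4826) mirrors MATLAB's isoutlier default, and statsmodels' multipletests implements the Benjamini-Hochberg procedure.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def mad_outliers(x, n_mads=3.0):
    """Flag values more than n_mads scaled MADs from the median (isoutlier default)."""
    x = np.asarray(x, dtype=float)
    scaled_mad = 1.4826 * np.median(np.abs(x - np.median(x)))
    return np.abs(x - np.median(x)) > n_mads * scaled_mad

# Hypothetical paired data: theta-alpha peak proportion vs. z-scored fMRI connectivity.
peak_prop = np.array([0.10, 0.20, 0.35, 0.40, 0.55, 0.60, 0.70, 0.90, 0.05, 0.95, 0.50])
conn_z    = np.array([-0.5, 0.1, 0.3, 0.2, 0.6, 0.8, 0.7, 1.1, -0.2, 4.5, 0.4])

keep = ~(mad_outliers(peak_prop) | mad_outliers(conn_z))   # drop outliers in either variable
r, p = stats.pearsonr(peak_prop[keep], conn_z[keep])

# Benjamini-Hochberg FDR across the four ROI pairings tested; the other three
# p-values here are arbitrary placeholders.
reject, p_fdr, _, _ = multipletests([p, 0.03, 0.20, 0.60], method="fdr_bh")
print(f"r = {r:.2f}, FDR-corrected p-values: {np.round(p_fdr, 3)}")
```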
Adding θα type as a predictor to the LMM fit using ABCD type only improved prediction of GCS scores before adjusting our α level for the additional hypotheses outlined in the introduction (p = .032, likelihood ratio stat = 4.59); thus, a larger sample might demonstrate an added value for including both predictors in the model. On the other hand, adding ABCD type as a predictor to the GLMM fit using θα type unambiguously improved prediction of conscious state (p <.0001, likelihood ratio stat = 75.4). As a benchmark to compare the above models against, we also predicted GCS and conscious state using spectral power in the delta, theta, alpha, and beta frequency bands. Absolute alpha (p = .0037, F (1,293) = 8.56) and beta (p = 2.9  10 À5 , F (1,293) = 18.1) power significantly predicted GCS, with lower alpha power (t = À2.93) and higher beta power (4.25) corresponding to higher GCS scores after accounting for covariates. Note, however, that alpha power was positively related to GCS in raw correlations that did not control for other predictors (absolute power, r = .29; relative power, r = .065). Adding absolute alpha power to LMMs that predicted GCS using ABCD type (p = .19, log likelihood stat = 1.70) or θα type (p = .23, log likelihood stat = 1.41) did not significantly improve model fit in either case. However, adding absolute beta power to the LMM that predicted GCS using θα type significantly improved model fit (p = .0028, likelihood ratio stat = 8.93), and a trend level improvement was observed when added to the LMM that predicted GCS using ABCD type (p = .019, likelihood ratio stat = 5.52). Using relative power, we again found that alpha (p = .0022, F (1,293) = 9.58) and beta (p = .0011, F (1,293) = 10.9) power significantly related to lower and higher GCS scores, respectively. Model fits were significantly improved when relative alpha power was added to LMMs predicting GCS (ABCD type: p = 3.2  10 À4 ; θα type: p = 5.1  10 À5 ), but relative beta power did not significantly improve model fits (ABCD type: p = .079, log likelihood stat = 3.08; θα type: p = .32, log likelihood stat = 0.98). Having examined spectral power as a predictor of GCS, we next examined it as a predictor of conscious state using GLMMs. No absolute power features significantly predicted conscious state, though we observed a trend for beta power (p = .058, F (1,293) = 3.63), with higher power corresponding to consciousness (t = 1.91); however, relative delta power did significantly predict conscious state (p = .0069, F (1,293) = 7.41), with lower power corresponding to consciousness (t = À2.72). Nonetheless, adding relative delta power to GLMMs that predicted conscious state from ABCD type or θα type did not improve model fit (the original models lacking relative delta power featured greater maximized log-likelihoods and thus no test was performed). Next, we used multiple linear regression with resampling to investigate whether ABCD type predicts chronic outcomes as measured with GOSe (again, types B, C, and D were grouped together to avoid outlier effects). Four patients missing GOSe score were excluded and four previously excluded patients with fewer than three EEG observations were reincluded here, see Table 2. The number of postinjury days to follow up did not correlate with GOSe scores (r = .05, p = .76) and thus was not considered to be a confound. 
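The model comparisons reported above rest on likelihood ratio tests between nested models fit by maximum likelihood, with the statistic referred to a chi-square distribution whose degrees of freedom equal the number of added parameters (the standard asymptotic assumption). As a brief worked check of the numbers quoted above:

```python
from scipy import stats

# LR statistic = 2 * (log-likelihood of full model - log-likelihood of reduced model).
# Adding theta-alpha type to the ABCD-type LMM gave a reported statistic of 4.59
# for one added fixed-effect parameter.
p_value = stats.chi2.sf(4.59, df=1)
print(round(p_value, 3))   # ~0.032, matching the reported p = .032
```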
For all hypotheses tested, the median t-statistic across resamples stabilized after~1,000 resamples ( Figure S4), that is, well within the number of resamples we performed (N = 9,999). ABCD type did not significantly predict GOSe (p = .37, t = 0.92). We then substituted θα type for ABCD type in the original multiple linear regression model and repeated the resampling procedure. As with ABCD type, θα type did not significantly predict GOSe (p = .52, t = 0.65). Neither absolute nor relative power in any frequency band significantly predicted GOSe. Finally, we used a data-driven clustering approach to determine whether clusters based on EEG spectral power (unnormalized) would predict GCS (n = 37) and/or GOSe (n = 33). Once again, we utilized N = 9,999 resamples to choose randomly one EEG observation per patient per resample. Two PCs were retained that, for all resamples, explained at least 80% of the variance in spectral power (resamples that did not explain this proportion of variance in the first two PCs were discarded and not counted toward N). K-means clustering was used to identify clusters in PC space (see Figure 4). Each resample yielded one cluster whose centroid had a positive PC 1 -coordinate and one cluster whose centroid had a negative PC 1 -coordinate (Figure 4d), and clusters were therefore labeled accordingly. Cluster labeling did not significantly predict GCS (p = .11, t = 1.64), conscious state (p = .32, t = 1.00), or GOSe (p = .68, t = 0.42). See Tables S2-S4 for p-values, t-statistics, and F-statistics from resampled tests. 3.2 | Relating thalamically-driven oscillations to mesocircuit recovery in a small sample (Table 3). We therefore correlated the proportion of channels with a θα peak (peak proportion), averaged across EEG observations within 48 hr of scanning, with fMRI connectivity. F I G U R E 3 Examples of ABCD model types. Power spectral densities (PSDs) from each channel are color-coded according to their ABCD type; unclassifiable channels are colored black. Peaks detected for classification are indicated with circles. Channels with a θα peak are dashed. Shaded areas are colored according to frequency band. (a) A-type EEG corresponding to a GCS score of 3. All channels were categorized as Atype, and peaks were only present in the delta band. (b) B-type EEG corresponding to a GCS score of 9. In total, one channel was categorized as A-type, nine channels as B-type, two channels as C-type, and one channel as unclassifiable (due to the presence of a beta peak without an accompanying theta or alpha peak). Eleven channels showed a θα peak. (c) C-type EEG corresponding to a GCS score of 7. In total, two channels were categorized as C-type, and the remaining nine channels were uncategorizable. Seven channels showed θα peaks. (d) D-type EEG corresponding to a GCS score of 14. In total, one channel was categorized as A-type, one channel as D-type, and the remaining nine channels were uncategorizable. Because D-type is a more progressive type than A-type, the tie between A-type and D-type (one channel each) is broken by D-type. Twelve channels showed a θα peak We found a trend relating peak proportion to fMRI connectivity between thalamus and PFC (r = .68, Pearson coefficient, p = .08, FDR corrected). Trend-level relationships were also observed between peak proportion and thalamo-striatal (r = .63, p = .10, FDR corrected) and striatal-pallidal BOLD signal coupling (r = .59, p = .10, FDR corrected). 
No relationship was observed between peak proportion and BOLD signal coupling between thalamus and PCC (r = .01, p = .97, FDR corrected). See Figure 5 for scatter plots and correlations. To ensure that these correlations were not overly sensitive to the 48-hr time limit used to select EEG observations, we also performed a sensitivity analysis and computed Pearson coefficients for time windows ranging 6-72 hr in length. Correlations appeared stable and maximized usable data when EEG observations were included within 13-51 hr of fMRI scanning ( Figure S5). Thus, our 48-hr time limit appears to maximize the amount of available data without including EEG observations outside of the observed window of stability. Note: Forty-four EEG observations from 11 patients were included in the correlation of EEG with fMRI connectivity measures. EEG observations were considered if they fell within 48 hr of MRI. Type proportion gives the proportion of EEG channels with a type progressed beyond the A-type (B, C, or D). Peak proportion gives the proportion of EEG channels with a θα peak. EEG time post-fMRI gives the number of hours before that the EEG observation took place after the patient's MRI scan (negative values indicate that the EEG observation preceded the MRI scan). Given the more favorable statistics of peak proportion (less skew and greater variance), this measure was correlated with fMRI connectivity ( Figure 5) after averaging across all available observations. also predict these variables, only relative alpha power significantly improves model fit when added as a predictor alongside ABCD type. | DISCUSSION Given its ability to predict conscious state, the ABCD model may have applications as a diagnostic biomarker, for example, to detect instances of covert consciousness (Huang et al., 2018;Monti et al., 2010) in patients lacking behavioral responsiveness. However, our findings suggest that, in acute patients, more progressive types (C and D) are rare, as patients may generally achieve this level of recovery after leaving the ICU. Accordingly, a more parsimonious categorization of EEG type based on the presence or absence of θα (4-12 Hz) peaks may be equally useful, as it concentrates specifically on thalamically-entrained cortical rhythms that form the core of the ABCD model (see Table 1). Furthermore, this approach has several practical advantages over the ABCD model: (a) all usable data are classifiable, (b) one classification will capture the majority of channels for all EEGs with an odd number of channels, and (c) because all data are classifiable, this approach does not introduce the sampling bias that occurs using the ABCD model. Finally, our study is the first to offer provisional evidence of the relationship between EEG and mesocircuit recovery, as measured with fMRI connectivity. Importantly, as predicted by the mesocircuit hypothesis, which emphasizes central thalamic projections to frontal cortex (Schiff, 2016), we found a stronger relationship between θα peak proportion and coupling between thalamus and PFC, versus coupling between thalamus and PCC. | EEG oscillations as a readout of mesocircuit recovery The mesocircuit model explains postinjury forebrain dysfunction in terms of inactive striatal medium spiny neurons (MSNs; Schiff, 2010Schiff, , 2016). 
MSN firing rates may be particularly sensitive to diffuse injury, as they require high levels of both dopaminergic neuromodulation and spontaneous background corticostriatal and thalamostriatal synaptic F I G U R E 5 Correlations of EEG (proportion of channels with θα peak or "peak proportion") with fMRI connectivity. Peak proportion was averaged across all usable EEG observations within 48 hr of MRI. Data points are sized proportionally to the number of averaged EEG observations and colored according to the mean hours elapsed between EEG and MRI (absolute value, warmer colors indicate greater mean time elapsed). Outliers (>3 scaled median absolute deviations from the median of either variable) are indicted with red circles around data points and were excluded from analysis. Dotted black lines represent the least-squares linear fit for included data points. We corrected for testing across four region of interest (ROI) pairings using false discovery rates (FDR). Correlations are reported as Pearson coefficients. (a) Peak proportion versus z-scored thalamo-striatal connectivity: r = .63, p = .10, FDR corrected (uncorrected: p = .07). Note the presence of two excluded outliers. (b) Peak proportion versus z-scored striatal-pallidal connectivity: r = .59, p = .10, FDR corrected (uncorrected: p = .07). Note the presence of one excluded outlier. (c) Peak proportion versus z-scored thalamo-prefrontal cortical (PFC) connectivity: r = .68, p = .08, FDR corrected (uncorrected: p = .02). (d) Peak proportion versus z-scored thalamo-posterior cingulate cortical (PCC) connectivity: r = .01, p = .97, FDR corrected (uncorrected: p = .97). Note the presence of two excluded outliers input to reach firing threshold (Grillner, Hellgren, Menard, Saitoh, & Wikström, 2005). When these necessary conditions are disrupted by diffuse, multifocal brain-injury, GABAergic MSNs go offline, and the GPi is released from striatal inhibition (Schiff, 2010). As a result, GPi powerfully inhibits central thalamus, leading to functional deafferentation of cortex ( Figure S6). Furthermore, the striatum receives weaker cortical and thalamic drive as a result of GPi inhibiting thalamus, leading to a positive feedback loop in which GPi is further disinhibited and central thalamus further inhibited. As originally proposed by Schiff, the ABCD model is a readout of mesocircuit recovery, with thalamically-driven EEG oscillations indicating progressive levels of thalamocortical integrity (Schiff, 2016). Consistent with this hypothesis, we found that ABCD type predicts conscious state, suggesting clinical application as a diagnostic biomarker in DOC. Because frontal cortical areas receive denser projections from central thalamus than other cortical areas (Deschenes, Bourassa, & Parent, 1996;Morel, Liu, Wannier, Jeanmonod, & Rouiller, 2005), frontal cortex is proposed to be disproportionately affected by mesocircuit dysfunction (Schiff, 2010(Schiff, , 2016. Accordingly, we found that the proportion of EEG channels displaying a θα peak is correlated with thalamo-prefrontal connectivity (r = .68), but not with thalamo-posterior cingulate connectivity (r = .01), in a small subsample of patients. Although neither relationship was significant after correcting for multiple comparisons, the former correlation appears promising and therefore warrants further investigation in larger samples. 
Furthermore, we also observed provisional, trend-level evidence relating EEG peak proportion to subcortical mesocircuit connections (thalamo-striatal and striatal-pallidal connectivity). Taken together, our findings in this small subset of patients are a promising, albeit inconclusive, indicator that EEG oscillations in the 4-12 Hz range may be reflective of mesocircuit recovery. | Improving the ABCD model While alternative models that predicted acute variables based on spectral power-used as benchmarks for the ABCD model-found that both absolute and relative alpha and beta power predicted GCS and that relative delta power predicted conscious state, only relative alpha power significantly improved the fit of any model with ABCD type as a predictor, as determined using log likelihood ratio tests. Adding absolute beta power, rather than relative alpha power, to the same model resulted in a trend-level improvement, but significantly improved model fit in the same LMM with θα type substituted for ABCD type. These results suggest that incorporating both information regarding peaks in frequency bands and the area under the curve in one or more frequency bands might improve the utility of the ABCD model as a diagnostic biomarker in DOC. Surprisingly, we found that even though the presence of alpha peaks in the ABCD model was associated with higher GCS scores, alpha power itself was associated with lower GCS scores in our models, though only after accounting for beta power and other covariates, as the raw correlation between alpha power and GCS was positive for both absolute and relative power when not controlling for other predictors. This multifaceted relationship between alpha oscillations and behavioral recovery underscores the potential importance of considering both oscillatory power and peaks. Furthermore, as the resonant frequency of alpha oscillations changes with development and aging (Chiang, Rennie, Robinson, Van Albada, & Kerr, 2011;Donoghue et al., 2020), the area under the curve in the alpha frequency range might be more sensitive to alpha activity than the presence or absence of a local maximum in the alpha range. For example, consider a slow 7 Hz posterior alpha rhythm peak that would still "leak" energy into the Finally, given that delta oscillations are widely regarded as indicators of cortical down states and unconsciousness (Buzsaki, 2006;Koch, Massimini, Boly, & Tononi, 2016;Massimini, Ferrarelli, Sarasso, & Tononi, 2012), one might be surprised by our finding that relative delta power, while predictive of conscious state, did not improve the fit of any peak-based model, especially given that these models did not already consider any peaks <4 Hz. While delta oscillations are clearly seen in states of unconsciousness including the slow wave sleep (Brown, Lydic, & Schiff, 2010;Franks, 2008;Murphy et al., 2011), anesthesia (Franks, 2008;Murphy et al., 2011;Purdon et al., 2013;Supp, Siegel, Hipp, & Engel, 2011), and DOC (Hussain et al., 2019;Kaplan, 2004;Sutter & Kaplan, 2012) including coma and the vegetative state (Sutter & Kaplan, 2012), high amplitude delta oscillations can nonetheless be observed in a variety of circumstances in which individuals are fully conscious and responsive, such as Angelman syndrome (Frohlich et al., 2020); for a comprehensive review of this and other cases, see Frohlich, Toker, and Monti (2021). Based on these other findings and our results herein, the consideration of parameters in the delta band may not be necessary for improving the ABCD model. 
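The point about a slow posterior alpha rhythm suggests quantifying the area under the PSD within a band rather than (or in addition to) discrete peaks. A minimal sketch of absolute and relative band power, assuming conventional band edges and a synthetic signal, is given below; it is illustrative only and not the pipeline used in this study.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)
# Synthetic signal: a slow ~7 Hz rhythm plus noise (illustrative only).
x = np.sin(2 * np.pi * 7 * t) + rng.normal(scale=2.0, size=t.size)

freqs, psd = welch(x, fs=fs, nperseg=4 * fs)

def band_power(lo, hi):
    """Area under the PSD between lo and hi Hz (trapezoidal rule)."""
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(psd[mask], freqs[mask])

total = band_power(1, 45)
for name, (lo, hi) in {"theta": (4, 8), "alpha": (8, 13)}.items():
    absolute = band_power(lo, hi)
    print(f"{name}: absolute = {absolute:.3f}, relative = {absolute / total:.3f}")
```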
Although the ABCD model outperformed a data-driven clustering approach based on PCA and K-means, it is possible that further refinement of the clustering model or a more sophisticated unsupervised learning approach could yield competitive results. Furthermore, while the clustering approach assigned all EEG observations to a cluster, the ABCD model was unable to classify some EEG observations, resulting in a sampling bias, revealed in our study using an LMM, whereby EEG observations corresponding to higher GCS scores were less likely to be classified. This is likely because conscious states more often yield "illegal" peak combinations that cannot be classified (e.g., a beta peak without any accompanying theta or alpha peak). This bias can be addressed by replacing ABCD type with θα type, which allows all EEG observations to be classified. We also observed that dynamic regimes from the ABCD model reflecting high levels of thalamocortical integrity (C and D types) were rare in acute patients. This is perhaps unsurprising, as more recovered patients would exhibit C and D types, but such patients are typically moved from the ICU to less intensive care and thus were not part of our acute sample. We must therefore consider whether circuit-level recovery precedes behavioral recovery early enough to be useful for predicting outcomes, as an ideal biomarker should predict a patient's pending recovery before the patient is moved out of the ICU. It is possible that the mere reemergence of theta oscillations (B-type), prior to that of other oscillations, already heralds recovery. However, we did not observe a predictive relationship between ABCD type (after grouping types B, C, and D together to avoid outlier effects) and chronic outcomes as measured with the GOSe. This might be due to the differing sensitivities of the GCS and GOSe assessments: the former is better suited for detecting the presence or absence of consciousness, whereas the latter is better suited for detecting recovery of daily living skills. Because ABCD types of B or higher were rarely observed in unconscious states, ABCD type relates very closely to GCS but less well to GOSe. It is possible that had our sample included more instances of C-types and D-types, which might relate to recovery of daily living skills, a relationship with GOSe would also have been detected. Finally, the mean GCS score corresponding to EEG observations categorized as C-type or D-type did not exceed that of observations categorized as B-type (Table 1, Figure 2c), suggesting that C-type and D-type do not indicate progressive recovery beyond B-type. However, given the small number of EEG observations categorized as C-type or D-type, it is possible that we did not have sufficient data to observe the progressive hierarchy predicted by the ABCD model. GCS ranges for A-type (GCS = 3-15) and B-type (GCS = 4-15) were both broad, suggesting low specificity. A-type was observed at ceiling (GCS = 15), though conversely, B-type was never observed at floor (GCS = 3). Given these data, EEGs that have progressed beyond A-type likely indicate that patients have already begun recovering behavioral responsiveness. Besides relating EEG to behavioral data, an important aim of our study was to validate an automated procedure for categorizing EEG observations according to ABCD type using quantitative, objective criteria.
We found that our automated EEG classification corresponded significantly with contemporaneous behavior, suggesting that the ABCD model can be implemented computationally, without the need for manual scoring. However, we found that 10.6% of EEG observations could not be classified according to the ABCD model due to peak combinations that are not defined by the model. Additionally, since each EEG channel had five possible classifications (A, B, C, D, or unclassifiable), one classification did not capture the majority of EEG channels in all instances. The foregoing issues are solved using a more parsimonious classification between two types, θα− and θα+, with one type capturing the majority of EEG channels in all instances. We found that classification based on θα type performed as well as ABCD classification in predicting GCS (both were strongly predictive) and GOSe (neither was predictive). The main advantage of the ABCD model was in improving prediction of conscious state when added to the GLMM with θα type as a predictor. Our findings suggest that the ABCD model performs similarly to a parsimonious and computationally simpler categorization scheme that infers thalamocortical integrity based on the presence of theta and/or alpha EEG oscillations in acute TBI patients.
| Limitations and future directions
Our study has a number of methodological limitations: (a) Because EEG data were collected in a clinical setting, EEG channel placement was variable and the number of channels common to all patients was low. This precluded the use of EEG source localization or spatial filtering (e.g., surface Laplacian) in our analysis. Furthermore, although patients with more progressive EEG types (C and D) were largely absent from our study because EEG data were only collected in the ICU, we believe that our findings have greater translatability, since the need for diagnostic and prognostic biomarkers is most pressing in the acute stage. (b) Despite collecting a large amount of EEG data (multiple days) from each patient, our patient sample size was relatively small (38 patients analyzed in total, 33 patients in the main analysis), so caution should be used in generalizing these findings to larger patient populations. (c) Patients were administered a large number of different medications, the influences of which could not be entirely controlled in our study. However, by utilizing logistic PCA, we covaried for two PCs that explained roughly two thirds of the variance in the medication data. (d) The GOSe, our measure of chronic (~6 month) outcome, was conducted in some cases as a phone interview and was thus an indirect assessment. The indirect nature of the assessment may have added substantial noise that could reduce correlations with EEG measures, possibly explaining our absence of findings when relating EEG to GOSe. (e) Our k-means clustering approach to predicting GCS did not perfectly mirror that based on EEG spectral peak classification (ABCD type or θα type), as the latter approach utilized LMMs and thus discarded four patients with insufficient longitudinal data who were included in the former approach. However, we believe that our results unambiguously show no relationship between data-driven clusters and GCS; thus, the asymmetry in our two analyses is likely inconsequential, though we acknowledge that a more sophisticated or better developed clustering or unsupervised learning approach could perhaps yield competitive results.
(f) Our correlation of EEG with fMRI measures was limited by the fact that many patients were not scanned until leaving the ICU, at which point EEG was no longer acquired. Thus, only 11 patients had at least one EEG observation within 48 hr of fMRI. We were therefore likely underpowered to detect significant relationships between neural oscillations (EEG) and mesocircuit integrity (fMRI). However, after removing outliers that could easily influence our small sample, we observed statistical trends. Two future directions are strongly encouraged by our results. Firstly, given the paucity of C and D types found in our data from acute patients in the ICU, future work should apply our automated peak fitting approach to data from patients moved to less intensive care, for whom more progressive types (C and D) may be observed. Such an investigation might be more successful in relating ABCD type to long-term outcome, given the potentially larger spread in ABCD type after sampling more recovered patients. This sample might also help determine whether a more parsimonious approach based on θα type is indeed preferable to the ABCD model, even when patients are more likely to exhibit a large spread in ABCD types. Secondly, we advocate for further work relating the proportion of EEG channels displaying θα peaks to mesocircuit functional connectivity in larger samples. Given the large correlation we observed between EEG peak proportion and thalamo-prefrontal connectivity (r = .68), only three additional patients (n = 14) would be needed to achieve >80% statistical power. Thus, even moderately larger samples may suffice to detect significant correlations. Additionally, dynamic causal modeling (Friston, Harrison, & Penny, 2003) applied to larger datasets may be useful for inferring the directionality of interactions between mesocircuit nodes, for example, to show that EEG peak proportion correlates with increased thalamic drive to cortex or decreased pallidal inhibition of central thalamus.
| CONCLUSIONS
Noninvasive readouts of mesocircuit recovery are desirable for diagnosing DOC in TBI patients. Because circuit-level recovery should precede behavioral recovery, such a readout may also be more useful than behavioral scales (e.g., GCS) in predicting which patients will recover consciousness. Our findings show that the reemergence of neural oscillations tracks recovery in acute TBI patients: relevant EEG measures correspond strongly and significantly with behavioral responsiveness and conscious state in the ICU, while also exhibiting a trend of greater spatial extent (i.e., peak proportion) with increasing thalamocortical connectivity. These results suggest that thalamically driven EEG oscillations may serve as diagnostic biomarkers in TBI and DOC. Our failure to detect a relationship between EEG and chronic outcome may be due only to the low occurrence of C and D types in our cohort, so these oscillations may still prove to have prognostic value when investigated at a less acute timepoint.
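As a concrete companion to the parsimonious two-way scheme discussed above, the following sketch classifies one EEG observation as θα+ or θα− from per-channel peak detections and computes the channel-wise peak proportion used in the EEG-fMRI correlations. It is a hypothetical illustration under stated assumptions (peak detection has already produced, per channel, the center frequencies of any spectral peaks; a 4-12 Hz band is assumed from the range discussed above), not the authors' pipeline.

```python
from typing import Dict, List

THETA_ALPHA = (4.0, 12.0)  # Hz; band assumed from the 4-12 Hz range discussed above

def classify_channel(peak_freqs_hz: List[float]) -> str:
    """Label one channel θα+ if any detected peak falls within 4-12 Hz."""
    lo, hi = THETA_ALPHA
    return "θα+" if any(lo <= f <= hi for f in peak_freqs_hz) else "θα−"

def peak_proportion(channel_peaks: Dict[str, List[float]]) -> float:
    """Proportion of channels displaying a θα peak."""
    labels = [classify_channel(freqs) for freqs in channel_peaks.values()]
    return labels.count("θα+") / len(labels)

def classify_observation(channel_peaks: Dict[str, List[float]]) -> str:
    """Label a whole EEG observation by the majority of its channels."""
    return "θα+" if peak_proportion(channel_peaks) > 0.5 else "θα−"

# Toy example: three channels with detected peak frequencies (Hz)
obs = {"Fz": [7.2, 18.5], "Cz": [9.8], "Pz": []}
print(peak_proportion(obs))      # 0.67
print(classify_observation(obs)) # θα+ (2 of 3 channels show a 4-12 Hz peak)
```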
Blast-Exposed Veterans With Mild Traumatic Brain Injury Show Greater Frontal Cortical Thinning and Poorer Executive Functioning
Objective: Blast exposure (BE) and mild traumatic brain injury (mTBI) have been independently linked to pathological brain changes. However, the combined effects of BE and mTBI on brain structure have yet to be characterized. Therefore, we investigated whether regional differences in cortical thickness exist between mTBI Veterans with and without BE while on deployment. We also examined whether cortical thickness (CT) and cognitive performance differed among mTBI Veterans with low vs. high levels of cumulative BE. Methods: 80 Veterans with mTBI underwent neuroimaging and completed neuropsychological testing and self-report symptom rating scales. Analyses of covariance (ANCOVA) were used to compare blast-exposed Veterans (mTBI+BE, n = 51) to those without BE (mTBI-BE, n = 29) on CT of frontal and temporal a priori regions of interest (ROIs). Next, multiple regression analyses were used to examine whether CT and performance on an executive functions composite differed among mTBI Veterans with low (mTBI+BE Low, n = 22) vs. high (mTBI+BE High, n = 26) levels of cumulative BE. Results: Adjusting for age, number of TBIs, and PTSD symptoms, the mTBI+BE group showed significant cortical thinning in frontal regions (i.e., left orbitofrontal cortex [p = 0.045], left middle frontal gyrus [p = 0.023], and right inferior frontal gyrus [p = 0.034]) compared to the mTBI-BE group. No significant group differences in CT were observed for temporal regions (p's > 0.05). Multiple regression analyses revealed a significant cumulative BE × CT interaction for the left orbitofrontal cortex (p = 0.001) and left middle frontal gyrus (p = 0.020); reduced CT was associated with worse cognitive performance in the mTBI+BE High group but not the mTBI+BE Low group. Conclusions: Findings show that Veterans with mTBI and BE may be at risk for cortical thinning post-deployment. Moreover, our results demonstrate that reductions in CT are associated with worse executive functioning among Veterans with high levels of cumulative BE. Future longitudinal studies are needed to determine whether BE exacerbates mTBI-related cortical thinning or independently and negatively influences gray matter structure.
INTRODUCTION
The use of improvised and other explosive devices, such as rocket-propelled grenades and mortar rounds, during the conflicts in Afghanistan and Iraq has led to a stark increase in the prevalence of combat-related blast exposure (BE) in the military population. Indeed, more than 60% of United States (U.S.) service members returning from the Middle East reported two or more BEs during their deployment (1). Similarly, among a convenience sample of Operation Enduring Freedom/Operation Iraqi Freedom/Operation New Dawn (OEF/OIF/OND) Veterans, nearly 80% reported at least one close-range BE (within 100 m) while overseas (2). Such BE, combined with improvements in combat protective gear and medical response methods, has led to unprecedented rates of certain non-lethal blast-related injuries among returning service members. Although musculoskeletal injuries, hearing loss, and vestibular dysfunction are common consequences of BE, of particular concern are the high rates at which BE results in mild traumatic brain injury (mTBI) among OEF/OIF/OND Veterans (3,4). The physical mechanisms of blast-related neurotrauma are complex and likely distinct from those involved in pure blunt-force injuries. Conceptually, explosive detonation results in an over-pressurized shockwave, or primary blast wave, that transmits through the skull and directly interfaces with, displaces, or damages neural tissue (5,6). This primary blast wave may also cause rapid physical displacement of blood from the abdominal area to the cranial vault, damaging the cerebrovasculature and blood-brain barrier (7)(8)(9). Additionally, the percussive forces associated with BE may cause blunt-force injury by propelling debris into a soldier's skull and/or causing the skull to make impact with other solid objects. Both blast and blunt forms of injury are thought to account for the acute clinical signs and symptoms of mTBI (i.e., loss of consciousness [LOC], alteration of consciousness [AOC], and posttraumatic amnesia [PTA]). Further, the quantification of BE itself is challenging, given that the intensity of explosive forces resulting in mTBI is difficult to operationalize and characterize. While it has been established that high-pressure BE may cause extensive neural damage in humans (10), the majority of OEF/OIF/OND Veterans are exposed to considerably lower levels of BE, from different proximities and from various types of explosives, highlighting the difficulty in characterizing the nature and extent of blast-related injury in this population. During the past decade, considerable effort has been devoted to characterizing the pathophysiological consequences, or precise neural and white matter changes, associated with mTBI.
Advanced neuroimaging techniques (i.e., diffusion tensor imaging [DTI] and resting-state functional MRI [rsfMRI]) have revealed that a host of structural and functional brain changes occur in military service members with blast-related mTBI [for review see (11)]. These include macro- and microstructural white matter alterations, cortical thickness and volumetric reductions, as well as functional network and connectivity changes. Across the various neuroimaging findings in mTBI samples, frontal and temporal regions appear to be especially vulnerable, although widespread, diffuse damage has also been observed (11)(12)(13). Importantly, the nature of brain changes may fundamentally differ based on the manner in which the injury was sustained (e.g., blast/blunt force combination, blast only, blunt only), although the precise independent contributions of each mechanism are especially challenging to disentangle given that they frequently co-occur at the time of injury. While BE may frequently result in an mTBI, low-level, or subconcussive, BE may exert its own negative influence on the brain. For example, studies of OEF/OIF deployed service members have shown that BE, independent of a diagnosis of mTBI, was associated with an increased likelihood of decreased white matter microstructural integrity compared to controls (14,15). These results were further corroborated by another study that examined serum markers of neuronal injury (e.g., ubiquitin C-terminal hydrolase-L1, αII-spectrin breakdown products, and glial fibrillary acidic protein) in members of the New Zealand Defense Force who did not experience an mTBI while participating in explosives training (16). Indeed, results revealed that (1) several participants showed increased levels of serum biomarkers of neuronal injury (i.e., ubiquitin C-terminal hydrolase-L1, αII-spectrin breakdown products) after low-level, subconcussive BE, and (2) higher levels of a serum biomarker composite were significantly associated with poorer performance on a neurocognitive composite. Similarly, one study (17) found that instructors, relative to students, endorsed more severe neurological symptoms, showed worse recognition memory, and displayed fMRI differences after a 2-week period of subconcussive BE during a breacher basic training course; the authors attributed these differences to the fact that instructors, by nature of their profession, likely have greater cumulative lifetime levels of blast exposure. Importantly, results from human studies align well with animal models with respect to neuropathological and neurobehavioral changes, although the subconcussive effects of BE can be difficult to operationalize and model in both human and animal studies. Nevertheless, experimental manipulation of peak pressure in mice and rats has revealed that even low levels of blast induce neuronal loss, white matter alterations, cell signaling disruptions, and behavioral changes (18)(19)(20). Moreover, ultrastructural brain changes, poorer motor and memory performance, as well as increased anxiety levels, have been observed in mice exposed to primary low-intensity blast in the absence of head motion (21). Recent research suggests that cumulative BE warrants careful consideration in Veteran samples, as several studies have revealed a dose-response relationship between BE, neurologic changes, and poor behavioral outcomes. For example, Ivanov et al.
(22) recently found that a greater number of BEs (whether or not they resulted in a head injury) was significantly associated with reduced white matter microstructural integrity of the cingulum bundle. Similarly, using 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET), greater cumulative BE was significantly associated with decreased neuronal activity in several regions of both the cerebrum and cerebellum of previously deployed military service members (23). Finally, greater cumulative BE has also been linked to negative behavioral outcomes, including poorer verbal memory performance (24) and worse post-concussive symptom reporting (25). Combined, this literature suggests not only that BE itself may be detrimental to brain structure and function, but also that individuals with greater cumulative levels of subconcussive BE may acquire more significant brain pathology and thus be at increased risk for worse clinical and functional outcomes. Given that the combination of BE and mTBI may be especially deleterious to brain structure and cognition, we examined such effects by: (1) investigating regional gray matter morphological differences (i.e., cortical thickness) in vulnerable frontal and temporal regions in mTBI Veterans who were and were not exposed to blast (mTBI+BE vs. mTBI-BE); and (2) determining the relationship between cortical thickness and cognitive outcomes across different levels of BE. We hypothesized that BE would be associated with increased cortical thinning, and that reduced cortical thickness would be associated with worse cognitive performance in those with higher levels of cumulative BE. To our knowledge, this study is the first to explore the neural and cognitive consequences of BE within the context of mTBI.
Participants and Procedures
This sample included 80 OEF/OIF/OND Veterans with a history of mild TBI who were divided into those who were blast-exposed (n = 51, mTBI+BE) and those with no blast exposure (n = 29, mTBI-BE). Participants were recruited through paper and television advertisements posted throughout the VA San Diego Healthcare System (VASDHS). Study procedures consisted of neuropsychological testing and the completion of a TBI clinical interview, self-report questionnaires, and magnetic resonance imaging (MRI) brain scans. Neuropsychological testing, clinical interviews, and questionnaire completion took place at the Veterans Medical Research Foundation located at the La Jolla VASDHS campus. MRI scans occurred at the University of California, San Diego (UCSD) Center for Functional MRI. This study was carried out in accordance with the recommendations of the Institutional Review Boards (IRBs) of the VASDHS and UCSD. The protocol was approved by the VASDHS and UCSD IRBs. All subjects gave written informed consent in accordance with the Declaration of Helsinki. Diagnosis of mild TBI was based upon the VA/DoD guidelines detailed in (26), which define mTBI as LOC < 30 min, AOC up to 24 h, and/or PTA < 24 h. A "total number of lifetime TBIs" variable was created by summing the total number of injuries determined to have met VA/DoD criteria for mTBI for each participant. Additionally, a "most significant TBI" variable was created by directly comparing the presence and duration of LOC vs. AOC for each mTBI; injuries where an LOC was sustained were considered more severe than those with an AOC only.
Finally, the "months since most recent TBI" was determined by calculating the difference between each participants' testing and their last reported mTBI. The TBI clinical interview assessed head injuries sustained prior to, during, and following any military deployment. Under the direct supervision of a neuropsychologist (DS, LDW), trained graduate-level students and/or post-baccalaureate research assistants administered TBI history interviews. This interview allows for comprehensive assessment and staging of up to 10 lifetime brain injuries and was adapted from the VA Semi-Structured Clinical Interview for TBI (27). During the interview, each participant was queried about the context (e.g., military vs. non-military event) and mechanism (blast-related vs. blunt/mechanical force) of each reported head-injury. Since medical records pertaining to injuries sustained overseas and in combat settings are frequently not available or documented, we relied on retrospective self-report of critical information related to the presence and duration of any reported loss of consciousness (LOC), alteration of consciousness (AOC), and/or posttraumatic amnesia (PTA) to determine whether the injury met diagnostic criteria for mild TBI. However, patient's VA medical charts were reviewed for consistency of head injury reporting during our comprehensive clinical interview. Participants were first queried about the number of times they were exposed to any blast detonation(s) that occurred within 100 meters (i.e., the distance of a professional football field) while on deployment. For each reported BE, details about the location, context (combat vs. non-combat), and direction (i.e., front, back, left, right) from which the BE was initiated were coded. Next, BE was categorized as a concussive (i.e., reported LOC, AOC, or PTA) or subconcussive (i.e., did not result in clinical symptoms of LOC, AOC, or PTA) injury. Participants with mTBI (due to blunt or blast-related mechanisms of injury) who also reported experiencing at least one subconcussive BE during their military service were considered to belong to the mTBI+BE group. mTBI participants who denied any exposure to blast while on deployment were considered to belong to the mTBI-BE group, and by nature of operationalization only had blunt mTBIs. Given that we were also interested in exploring the negative effects of cumulative BE on brain structure and function, the median number of subconcussive BEs that occurred within 100 meters was calculated and used to further divide the mTBI+BE group into those with low (n = 22, mTBI+BE Low) vs. high (n = 26, mTBI+BE High) levels of subconcussive BE. Importantly, these dichotomizations are sample specific, as cumulative levels of blast exposure may differ across other samples, branches, or professions within the military. 
The following exclusion criteria were applied to the study sample: (1) history of any TBI that was classified as moderate (LOC > 30 min but < 24 h, AOC > 24 h, or PTA > 1 day but < 7 days) or severe (LOC ≥ 24 h, AOC > 24 h, or PTA ≥ 7 days); (2) history of any neurological disorder (e.g., epilepsy, multiple sclerosis, stroke, chronic fatigue syndrome) other than TBI; (3) history of a major mental illness (e.g., schizophrenia, bipolar, or psychotic disorder) other than depression or post-traumatic stress disorder; (4) current substance/alcohol abuse as per Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition, Text Revision (DSM-IV-TR) criteria (28); (5) current or prior history of substance/alcohol dependence as per DSM-IV-TR criteria; (6) a positive toxicology screen as measured by the Rapid Response 10-drug Test Panel; (7) any contraindications to MRI scanning (e.g., pregnancy, presence of metal); (8) any gross abnormalities, visible lesions, or cortical contusions on T1 structural MRI scans; and (9).
Self-Report Symptom Rating Scales
Participants completed self-report symptom rating scales. The PTSD Checklist (PCL-M) was used to capture current levels of posttraumatic stress (31). The Beck Depression Inventory-II (BDI-II) was used to capture current levels of depressive symptoms (32). The Neurobehavioral Symptom Inventory (NSI) was used to assess current levels of post-concussive symptoms (33).
Executive Functions Factor
Participants were administered the following neuropsychological tests, which emphasized executive functions, given that this cognitive domain is most commonly affected in mTBI samples: the Trail Making and Verbal Fluency tests from the Delis-Kaplan Executive Function System [D-KEFS; (34)] and the Wisconsin Card Sorting Test [WCST; (35)]. Additionally, the Reading subtest of the Wide Range Achievement Test 4 [WRAT4; (36)] was administered. Three mTBI+BE participants did not complete all of the neuropsychological testing and thus were excluded from secondary cognitive analyses. Raw scores were converted to demographically corrected standardized scores (e.g., scaled scores or T-scores) using the accompanying normative data for the following neuropsychological variables: WCST Total Errors, WCST Perseverative Errors, D-KEFS Verbal Fluency Switching Total Correct, D-KEFS Verbal Fluency Accuracy, and D-KEFS Number-Letter Switching. Next, the five variables were reduced to one executive functions factor using principal component analysis with Varimax rotation. The executive functions factor was determined to have acceptable internal consistency (α = 0.722).
Neuroimaging Data Acquisition
Participants were scanned on a 3-Tesla General Electric MR750 system with an eight-channel head coil. A high-resolution T1 anatomical scan was acquired in the sagittal plane using a 3D FSPGR sequence with the following parameters: FOV = 24 cm, 256 × 192 matrix, TR = 8.1 ms, TE = 3.192 ms, flip angle = 12°, TI = 550 ms, bandwidth = 31.25 kHz, and 172 1.2-mm slices. After image acquisition, all T1 images underwent visual inspection for quality control purposes to ensure that any artifacts that might affect image processing (e.g., motion, field inhomogeneity) were minimal.
Neuroimaging Processing
Cortical surfaces on all T1 images were reconstructed and parcellated into regions of interest (ROIs) using the FreeSurfer 5.1 recon-all processing pipeline (37).
FreeSurfer, a freely available cortical and subcortical segmentation and parcellation software suite, utilizes a series of automated imaging algorithms to (1) remove non-brain tissue, (2) conduct a Talairach transformation, (3) segment cortical and subcortical white and gray matter structures, (4) perform nonparametric nonuniform intensity normalization of intensity values, (5) tessellate gray and white matter boundaries, (6) correct topology, and (7) deform the surfaces along intensity gradients to optimally place the gray/white and gray/CSF borders at the locations where the greatest shift in intensity defines the transition to the other tissue class (38,39). Next, the data undergo surface inflation and spherical registration to match individual cortical folding patterns to the expected cortical geometry across subjects. This produces a mesh of the pial and white matter surfaces (38,39). Cortical thickness was calculated as the distance (in millimeters) between the gray/white matter boundary and the gray matter/cerebrospinal fluid boundary at each vertex on the cortical surface. Importantly, cortical thinning is thought to represent trauma-induced synaptic pruning or apoptosis. FreeSurfer's measurement of cortical thickness has been validated using both manual (40) and histological (41) analysis techniques. The Desikan-Killiany atlas was used to parcellate and label each hemisphere into 32 independent regions (42). The weighted average of several smaller ROIs [see (42) for these precise subdivisions] was used to obtain a mean cortical thickness value for each hemisphere in the following frontal and temporal lobe ROIs: (1) superior frontal gyrus (SFG), (2) middle frontal gyrus (MFG), (3) inferior frontal gyrus (IFG), (4) orbitofrontal cortex (OFC), (5) anterior cingulate cortex (ACC), (6) medial temporal lobe (MTL), and (7) lateral temporal lobe (LTL). See Figure 1 for a depiction of the ROIs utilized in this study.
Statistical Analyses
Analyses of variance (ANOVAs) were performed to determine whether the groups (mTBI+BE vs. mTBI-BE) differed on basic demographic variables, quantitative TBI injury characteristics, and self-report symptom rating scales. Chi-squared analyses were utilized to examine group differences on categorical demographic and TBI injury variables. Analyses of covariance (ANCOVAs) were used to determine whether the groups differed on cortical thickness ROIs. Regression analyses were used to determine whether cortical thickness was associated with cognition. All statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS) version 21 (SPSS IBM, New York, USA).
Sample Demographic and Injury Characteristics
The mTBI+BE group reported an average of 11.25 BEs (Median = 4, Range = 1-149). Participant demographics are presented in Table 1. The mTBI+BE group did not differ from the mTBI-BE group on age, ethnicity, education, or psychiatric symptomatology (all p-values > 0.05). However, the mTBI+BE group had a significantly higher proportion of men (p = 0.001), a greater number of lifetime TBIs (p < 0.001), and more severe post-concussive symptoms (p < 0.02) relative to the mTBI-BE group. The mTBI+BE group also had a higher proportion of blast-related TBIs for their most significant injury (p < 0.001) and differed by branch of service (p < 0.001) relative to the mTBI-BE group.
Cortical Thickness Differences Across mTBI+BE vs.
mTBI-BE Groups
A series of ANCOVAs was performed in order to determine whether the groups differed in cortical thickness of the lateralized frontal ROIs. ANCOVAs controlling for age, sex, PCL-M total score, and total number of TBIs revealed a main effect of group such that the mTBI+BE group displayed significantly thinner cortices relative to the mTBI-BE group for the left MFG (p = 0.023), left OFC (p = 0.045), and right IFG (p = 0.034). A second series of ANCOVAs was performed in order to determine whether the groups differed in cortical thickness of the temporal ROIs. ANCOVAs controlling for age, sex, PCL-M total score, and total number of TBIs revealed no significant group differences for the lateralized MTL (p's > 0.237) and LTL (p's > 0.245).
Cortical Thickness and Cognitive Associations in the mTBI+BE Group
A set of multiple linear regressions was performed to determine whether, independent of age and PTSD symptoms, there was a significant association between the executive functions factor and cortical thickness of the ROIs that differed between the mTBI+BE and mTBI-BE groups. We chose to focus on brain-behavior relationships in the ROIs that significantly differed between the groups (i.e., left orbitofrontal cortex, left middle frontal gyrus, and right inferior frontal gyrus) in an effort to minimize multiple comparisons while better understanding the behavioral significance of the observed brain differences for all subsequent analyses. In each model, age, PCL-M total score, and a brain ROI were entered as independent variables, whereas the executive functions factor was the dependent variable. Results revealed no significant associations between any of the ROIs and the executive functions factor (all p-values > 0.07).
Cortical Thickness and Cognitive Associations by BE Thresholds in the mTBI+BE Group
Secondary multiple regression analyses were performed to determine whether BE thresholds moderated the association between cortical thickness and performance on the executive functions factor. The mTBI+BE group was dichotomized into those with low (n = 22, mTBI+BE Low) and high (n = 26, mTBI+BE High) BE via a median split of the total number of self-reported blast exposures (Median = 4). Three subjects did not complete all neuropsychological testing and were thus excluded from subsequent analyses. For each model, the executive functions factor was entered as the dependent variable; independent variables included age, the PCL-M total score, BE grouping (mTBI+BE High vs. mTBI+BE Low), the cortical thickness ROI, and the BE grouping × cortical thickness interaction term. Significant BE × cortical thickness interactions emerged for the left OFC and left MFG, such that reduced cortical thickness was associated with worse executive functioning in the mTBI+BE High group but not in the mTBI+BE Low group (see Table 2 and Figure 2). Finally, there was no significant right IFG × blast exposure interaction on the executive functions factor (β = −1.785, t = −0.923, p = 0.36).
DISCUSSION
We explored whether BE was associated with reduced cortical thickness, as well as the influence of cumulative BE on cognition, in Veterans with a history of mTBI. Results showed that, relative to those with mTBI who had not been exposed to blast while on deployment, Veterans with both BE and mTBI demonstrated significantly thinner cortices in frontal regions of the cerebrum. Moreover, in those with greater cumulative BE, reduced cortical thickness was significantly associated with poorer performance on tasks of executive function. These findings suggest an association between cortical thinning and concomitant cognitive impairment in post-deployment Veterans with mTBI and BE.
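The moderation analysis reported above (testing whether the association between ROI cortical thickness and the executive functions factor depends on BE group) corresponds to a regression with an interaction term. The sketch below, using statsmodels' formula interface with assumed column names (exec_factor, age, pclm, ct_left_ofc, be_high) and synthetic data, is a hypothetical illustration, not the study's code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 45  # illustrative sample size
df = pd.DataFrame({
    "exec_factor": rng.normal(size=n),          # executive functions composite
    "age": rng.integers(22, 55, n),
    "pclm": rng.integers(17, 85, n),            # PCL-M total score
    "ct_left_ofc": rng.normal(2.5, 0.15, n),    # cortical thickness (mm)
    "be_high": rng.integers(0, 2, n),           # 1 = mTBI+BE High, 0 = mTBI+BE Low
})

# Executive functioning regressed on age, PTSD symptoms, cortical thickness,
# BE group, and the cortical thickness x BE group interaction
model = smf.ols("exec_factor ~ age + pclm + ct_left_ofc * be_high", data=df).fit()
print(model.summary().tables[1])  # inspect the ct_left_ofc:be_high term
```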
Although cortical thinning has previously been demonstrated within Veteran mTBI samples (43)(44)(45), no known studies have explored whether the combination of BE and mTBI negatively influences gray matter structure. As hypothesized, we found that those with both mTBI and BE showed greater frontal cortical thinning relative to those with mTBI alone. These results suggest that among Veterans with mTBI, BE, even at subconcussive levels, may confer an increased risk for negative brain changes. From a pathophysiological perspective, it remains unclear whether subconcussive BE exerts its own negative influence on gray matter structure or merely exacerbates mTBI-related cortical thinning. Findings from the animal literature have shown that BE of varying intensities (with associated head oscillation) results in neuronal loss (46) and the accumulation of tau, a protein associated with neurodegenerative diseases such as Alzheimer's disease and chronic traumatic encephalopathy in humans (47,48). While speculative, we suspect that independent and interactive processes co-occur to produce poorer outcomes. Although there is considerable heterogeneity in both mTBI and blast-related injury, a synergistic effect may occur, particularly in overlapping areas of damage. There is a critical need for future longitudinal studies to explore the precise mechanisms underlying gray matter changes due to BE in humans. Previous behavioral and neuroimaging studies in Veterans have largely focused on characterizing the distinct effects of (1) blunt vs. blast-related mTBI, (2) subconcussive primary BE vs. pure blast mTBI (i.e., concussive injury due to primary BE without blunt injury), and/or (3) mTBI due to blunt or blast mechanisms vs. controls (15, 22, 49-52). While some of these studies have found cognitive, symptom, or neuroimaging differences across groups (15,22,51,52), others have failed to find any categorical differences (15,49,50,53). Our results suggest that failure to consider BE may explain, at least to some degree, the disparate findings observed across prior studies. Indeed, although the findings of this study suggest that BE is an important factor influencing outcomes, few studies report or explore BE in relation to their observed findings. This is especially important given that a recent study of military service members showed that approximately two-thirds of their mTBI sample reported BE while on deployment (4). Although no direct comparisons in symptom reporting were made between those with mTBI who were and were not blast-exposed, it is possible that the high prevalence of BE resulted in brain changes that contributed to the large proportion of individuals reporting postconcussive symptoms that persisted beyond the expected 3-month recovery window. Interestingly, several recent studies have demonstrated that, independent of mTBI, close-range BE (i.e., within 10 m) is associated with both altered functional connectivity (54,55) and verbal memory deficits in OEF/OIF/OND Veterans (24). That is, at least with respect to very close-range blast, similar blast-related brain and behavioral associations were observed in Veterans with or without mTBI. However, although proximity may be a critical factor with respect to the intensity and severity of blast-related neural injury, the close-range BE group within these studies reported a significantly greater number of BEs relative to the distant BE group (24,54,55).
Thus, these findings may reflect cumulative BE, as opposed to (or in addition to) distance, as secondary analyses within one of these studies revealed that multiple distant BEs were also associated with reduced verbal memory performance (24). Finally, it is worth noting that a greater proportion of Marines were represented in our mTBI+BE group, and others have shown that both Army and Marine service members are more likely to sustain close-range BE (24). Future research is needed in order to (1) clarify how cumulative BE may differ as a function of occupation, combat, gender, training, or weaponry utilized and (2) quantify distinct thresholds of BE that may have negative brain or behavioral consequences in Veteran service members. Results from our study also revealed that reduced cortical thickness was significantly associated with poorer performance on our executive functions factor in those with mTBI who had higher levels of cumulative BE, but not in those with mTBI who had lower levels of cumulative BE. While animal studies have shown that exposure to a single blast is sufficient to evoke neuronal loss and white matter degradation, recent work has shown that the severity of these brain changes is especially pronounced in mice with multiple BEs (23). Moreover, when compared to sham-exposed control mice, impaired motor performance was observed only in mice with multiple, as opposed to single, BEs. It is possible that a certain threshold of neuronal damage must be reached before behavioral relationships are observed. A recent case study revealed that a Veteran with repeated BEs that never met clinical criteria for mTBI demonstrated significant white matter alterations, as well as impairments in processing speed, recognition memory, working memory, and executive function when compared to a reference control group (56). Critically, these results align with those of the present study in that they demonstrate that cumulative BE is associated with poorer behavioral outcomes, with greater numbers of BEs being more deleterious than a single blast. The translation of animal studies of BE to explorations in humans has proven quite difficult and is not without serious limitations. Animal studies take place in controlled settings where distinct models of TBI (e.g., fluid percussion injury, controlled cortical impact, closed head injury) and/or types of explosives (e.g., live wire, shock tube, pressure generators) can be manipulated. Unfortunately, these factors are difficult to characterize in humans, as blast or blunt mechanisms of injury may occur independently or simultaneously, the precise quantification of blast intensity is virtually impossible in the combat setting, and preexisting vulnerabilities (e.g., substance use, individual differences in brain architecture and volume) may be at play. One recent human neuropathological study tried to account for some of these factors by directly comparing brain specimens of male service members with chronic (n = 5) or acute/severe BE (n = 3) to those of civilians with impact- or blunt-related TBI (n = 5), prior exposure to opiates (n = 5), or no neurological conditions (57). Interestingly, both BE groups, which also consisted primarily of patients with antemortem PTSD diagnoses, demonstrated significantly greater astroglial scarring relative to the civilian groups.
While the intensity or severity of blast exposure could not be corroborated with objective data, the authors provide some evidence of distinct blast-related pathology relative to the other groups. In another human neuropathological study (58), the authors compared brain specimens of Veterans with a history of BE (n = 5) to several control groups: controls with a history of opiate overdose (n = 6), controls with anoxic-ischemic encephalopathy (n = 6), controls with a history of non-blast TBI (n = 5), and healthy controls (n = 7). The authors found evidence of increased amyloid precursor protein (APP)-positive axonopathy in blast-exposed Veterans relative to the control groups, providing additional support that BE results in distinct neuropathological patterns. Nevertheless, precise quantification of BE in humans is difficult, and other factors that predate military experience may also play a contributory role in our observed findings. It is important to note that the current study has several limitations that warrant discussion. First, as is the case with most military TBI studies, we relied heavily upon retrospective self-report of both BE and head injury events that may have occurred many months or years prior; therefore, these events are subject to recall bias and could not be confirmed with medical documentation or field records at the time of injury. Similarly, while we conducted comprehensive TBI and BE interviews, many in the mTBI-BE group had been deployed to a combat zone, and it is possible that some members of the mTBI-BE group may have failed to recall BEs that occurred within (or beyond) 100 m. Secondly, this was a cross-sectional research study, and the results do not demonstrate that changes in cortical thickness occurred, only that thinner cortices were observed in Veterans with mTBI who were blast-exposed relative to those without BE. In other words, these are merely observed associations in one sample of Veterans, and future longitudinal studies are needed to disentangle whether our observations represent changes or are merely the result of pre-injury differences in cortical thickness across the groups. There is evidence of neurological and psychiatric symptom differences between BE and non-BE controls (59). Thus, additional comparison of BE and non-BE Veteran controls with no TBI history may help in clarifying the independent and/or synergistic effects of BE on mTBI, and we are currently collecting blast-related information for a subset of Veteran controls in order to clarify this possible relationship. Additionally, the mTBI+BE group was composed of mixed mechanisms of injury (i.e., blunt or blast-related mTBI) and had a greater number of TBIs relative to the mTBI-BE group. Although we controlled for the total number of TBIs in our analyses, it is difficult to disentangle the unique contributions of subconcussive blast, mTBI, and repetitive mTBI on cortical thinning in the present study. Moreover, although PTSD has also been linked to cortical thinning (60,61) and cognition in Veteran mTBI samples (62), our mTBI+BE and mTBI-BE groups did not differ on this variable, and we controlled for PTSD symptom severity in our analyses. Finally, additional work in this area should incorporate other imaging metrics (e.g., arterial spin labeling) that may be more sensitive to blast-related brain changes, especially since mounting experimental animal evidence shows that blast-related head injury is associated with greater vascular pathology when compared to traditional blunt-force TBI.
CONCLUSION
This is the first known study to demonstrate that the combination of BE and mTBI (due to either blast or blunt force mechanisms of injury) negatively influences gray matter structure. Additionally, our results provide preliminary evidence that mTBI Veterans with both high levels of BE and reduced cortical thickness demonstrate reduced executive functioning, which is striking given that our sample is composed of individuals with mild neurotrauma who are, on average, many years removed from their head injury event. Taken together, these findings suggest that Veterans with both mTBI and exposure to higher levels of blast may be at increased risk for both cerebral and behavioral changes post-deployment. Future prospective studies are needed to disentangle (1) the precise pathophysiological mechanisms underlying cortical thickness changes associated with BE, mTBI, and these comorbid conditions; (2) the extent to which outcomes may differ based on the distance, intensity, or severity of BE; and (3) the negative consequences of repetitive mTBI as opposed to repetitive subconcussive BE.
AUTHOR CONTRIBUTIONS
AC and LD-W contributed to manuscript conception.
Discreteness effects in a reacting system of particles with finite interaction radius
An autocatalytic reacting system with particles interacting at a finite distance is studied. We investigate the effects of the discrete-particle character of the model on properties like reaction rate, quenching phenomenon and front propagation, focusing on differences with respect to the continuous case. We introduce a renormalized reaction rate depending both on the interaction radius and the particle density, and we relate it to macroscopic observables (e.g., front speed and front thickness) of the system.
I. INTRODUCTION
Most of the chemical and biological processes that appear in Nature involve the dynamics of particles (e.g., molecules or organisms) that diffuse and interact with each other and/or with external forces [1,2,3]. If the total number of particles per unit volume, N, is very large, a macroscopic description of the system in terms of continuous fields, e.g., density or concentration, is usually appropriate. A prototypical model for these reaction-diffusion systems is the Fisher-Kolmogorov-Petrovskii-Piskunov (FKPP) equation [4,5] describing the spatio-temporal evolution of a concentration,
∂θ(x, t)/∂t = D ∂²θ/∂x² + p θ(1 − θ),   (1)
where D is the diffusion coefficient, p is the reaction rate that determines the characteristic reaction time, τ = 1/p, and θ(x, t) is the concentration field (for simplicity we have assumed one spatial dimension). It is well known [1,6,7] that Eq. (1) admits propagating front solutions with asymptotic speed v₀ = 2√(Dp) and thickness λ₀ ∝ √(D/p); for the more general equation ∂θ/∂t = D ∂²θ/∂x² + p g(θ) (2), with a convex function g(θ) such that g(θ) > 0 for 0 < θ < 1, g(0) = g(1) = 0, g′(0) = 1, and g′(1) < 0, one has the same behaviour for v₀ and λ₀ [8]. On the other hand, if the number of particles per unit volume is not very large, the continuous description may not be appropriate. In such a case, one can consider a discrete particle model with N particles whose positions x_α(t) evolve according to the Brownian motion
dx_α(t)/dt = √(2D) η_α(t),   α = 1, ..., N,
where η_α is a white noise term. Moreover, each particle is characterized by a color C_α(t) which determines the particle type. The model is completed by the reaction rule between particles. In order to obtain an autocatalytic reaction, A + B → 2B (3), one can consider just two types of particles, C = 0 (unstable) and C = 1 (stable), corresponding to the species A and B, respectively, with the following dynamics: particles of type 1 always remain 1, and a particle of type 0 changes to 1 with a probability that depends both on p, the reaction rate, and on how many type-1 particles are around it. It is not difficult to realize that in a suitable continuum limit, Eq. (1) gives the evolution of the color concentration of this microscopic system (see Section II). The aim of this work is precisely to study the case in which the density of individuals is small, so that the discrete nature of the system can play a role [9,10]. Several approaches have been adopted to investigate the relevance of the corrections to the continuum limit. On one side, it has been assumed that the dynamics of the system is given by deterministic macroscopic equations like Eq. (1) supplemented by a noise term, of order 1/√N, which takes into account the microscopic fluctuations originating from the finite number of particles [11]. On the other side, following the work of Brunet and Derrida [12], this problem has been successfully studied by using a cutoff at the density value 1/N in the continuous field equations. This has been employed to determine corrections to some front properties in FKPP-like equations (see [13] for a review).
In particular, it has been shown that the deviation of the front speed from its continuum value is of order 1/(ln N)², which is rather significant [12]. More recently, Kaneko and coworkers [14] analyzed the dynamics of some chemical reactions, studying the influence of molecular discreteness. They identify typical length scales in the system which may separate the continuum behavior from the discreteness-influenced one. They report transitions to a novel state with symmetry breaking that is induced by discreteness, but they do not investigate front propagation properties in terms of the number of particles. A crucial quantity is the so-called Kuramoto length, l_K = √(2Dτ), which is proportional to the front thickness and measures the typical distance over which an unstable particle diffuses during its lifetime (note that τ = 1/p can be interpreted as the average lifetime, i.e., the time particles live before they react). In some situations, especially when there is a propagating front, if the typical distance among particles is much smaller than l_K, the concentration of the particles can be regarded as continuous. On the other hand, when there are not many particles within a region of size l_K, discreteness effects should be taken into account [14]. In our work we study the interplay between the length scales of the problem, our principal aim being to explain the effects of the discrete nature of the system on properties like reaction rate, quenching and front speed. Differently from most works on discrete reaction-diffusion systems, we do not consider a lattice model: particles move diffusively in space and interact when their distance is smaller than an interaction radius R, which corresponds to a natural length scale appearing in many chemical and biological systems [10,15]. We study several properties of the system as a function of R, realising, via comparison of the different length scales, when the effects of discreteness play a dominant role. As expected, the continuum limit is described by the FKPP equation. Nevertheless, we remark that in order to recover the proper continuum limit it is not sufficient to have a very large density of particles. We discuss the problem in the framework of chemical reaction dynamics, but everything can be recast in the context of population dynamics. The paper is organized as follows. In the next section we present the particle model for the autocatalytic reaction. In Section III we study the renormalized reaction rate of the system when particles of both types are in a closed vessel, initially uniformly randomly distributed in space. In Section IV we study quenching phenomena when B particles can turn into A particles; this causes the emergence of new properties of the model that will be studied in detail. Then, in Section V we investigate the front properties of the model (by choosing a proper initial distribution and considering an infinite system in the propagation direction), mainly in terms of the interaction radius. Section VI presents our conclusions.
II. MODEL
Consider N particles in a two-dimensional box of size L_x × L_y. Each particle is identified by its position, x_α(t), and its color, C_α(t), indicating the particle type. To specify the dynamics it is necessary to give the evolution rule for the positions and the interaction rule between particles (the chemistry). Space will be considered continuous while time will be discrete (with time step ∆t); however, the precise value of ∆t, if small enough, is not relevant.
Particle dynamics is synchronous, i.e., all particle properties are updated at the same time. As already mentioned, to model the autocatalytic reaction (3) we consider two kinds of particles: type A particles, C_α = 0 (unstable), and type B particles, C_α = 1 (stable). The chemical evolution of the particles is given by the following stochastic process:
• if C_α(t) = 0 then C_α(t + ∆t) = 1 with probability P_AB = W_AB ∆t.
The probability (per unit time) W_AB depends on the number of stable particles within the interaction radius. In fact, in the continuum limit, the autocatalytic reaction (3) is expected to obey the mass action law dΘ_A/dt = −p Θ_A Θ_B, where Θ_A and Θ_B are the concentrations of particles A and B, respectively, with Θ_A + Θ_B = 1. The probability that a particle A changes into a B particle is assumed to be W_AB = p N_R(B)/N_loc(R), where N_R(B) indicates the number of B particles within the interaction radius R around the given particle A, N_loc(R) = πR²ρ is the spatial average number of particles (of any type) in a radius R, and ρ = N/(L_x L_y) is the density of particles. We discuss in the following that in a suitable limit the previous probabilistic rule converges to the FKPP equation. Let N(A, t) and N(B, t) be the total number of A and B particles, with N(A, t) + N(B, t) = N. The dynamics of the number of B particles is given by the discrete stochastic process N(B, t + ∆t) = N(B, t) + Σ_k y_k, where k is the index identifying the A particles and y_k is a discrete random variable which is 1 with probability ∆t W_AB (when particle A changes into a B particle) and 0 with probability 1 − ∆t W_AB. After a little algebra we obtain
dΘ_B/dt = p (1 − Θ_B) E(N_R(B, t))/(πR²ρ),   (7)
where Θ_B = E(N(B, t))/N indicates the expected average concentration of B particles. In the case of an infinite number of spatially premixed particles the last term on the right-hand side of the above relation becomes Θ_B(t) and we finally obtain the FKPP equation for the homogeneous case: dΘ_B/dt = p Θ_B(1 − Θ_B) (8). In general, under non-premixed spatial conditions and/or at small density, E(N_R(B, t))/(πR²ρ) ≠ Θ_B and the system cannot be described by the FKPP dynamics. Four length scales characterize the model: i) the mean nearest-neighbour distance between particles, d_m; ii) the interaction radius of the model, R; iii) the Kuramoto length scale, l_K; and iv) the size of the system, L. It is expected that the continuum limit is obtained when d_m ≪ R ≪ l_K ≪ L. While the scale separation between d_m and L can be easily achieved, in many situations it might happen that the condition R ≪ l_K is not verified, or that R is of the same order as d_m. In this case the evolution of the system could be very different from that of the continuous FKPP limit. It is the objective of this work to investigate some properties of the model in this regime. Before starting the discussion of the numerical results, some comments are in order about the role of diffusion. Since we introduce the natural length scale of the interaction, R, a diffusive time related to this distance arises, t_D(R) = R²/D. When this time is much smaller than the reaction time τ = 1/p, the system is locally homogenized before the reaction happens. In order to focus on the reaction properties rather than on diffusive effects we work in the limit t_D ≪ 1/p.
III. PREMIXED PARTICLES IN CLOSED BASINS
Firstly we study the model in a closed vessel, where, as initial condition, particles of both types are premixed and uniformly randomly distributed in space. In such a case, the system evolution necessarily ends with the complete filling of the box with type B particles. Therefore the most significant physical quantity is the filling rate of particles B, which is related to the reaction rate.
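The update rule of Section II is simple enough to sketch directly. The following is a minimal, illustrative implementation under stated assumptions (two-dimensional periodic box, Euler step for the Brownian motion, and the reaction probability P_AB = ∆t · p · N_R(B)/(πR²ρ) as reconstructed above); it is not the authors' code, and parameter values are arbitrary.

```python
import numpy as np

def step(pos, color, L, D, R, p, dt, rng):
    """One synchronous update of the particle model:
    Brownian displacement followed by the autocatalytic reaction A -> B."""
    N = len(color)
    rho = N / (L[0] * L[1])
    n_loc = np.pi * R**2 * rho          # average number of particles within radius R

    # Brownian motion with periodic boundary conditions
    pos = (pos + np.sqrt(2 * D * dt) * rng.normal(size=pos.shape)) % L

    # Count B particles within radius R of each A particle (minimum-image distances)
    new_color = color.copy()
    for i in np.flatnonzero(color == 0):            # unstable (A) particles
        d = np.abs(pos - pos[i])
        d = np.minimum(d, L - d)                    # periodic minimum image
        near_B = (color == 1) & (np.hypot(d[:, 0], d[:, 1]) <= R)
        w_ab = p * near_B.sum() / n_loc             # reaction rate per unit time
        if rng.random() < w_ab * dt:                # P_AB = W_AB * dt
            new_color[i] = 1
    return pos, new_color

# Small illustrative run: premixed particles in a unit box
rng = np.random.default_rng(0)
L = np.array([1.0, 1.0])
N, D, R, p, dt = 2000, 1e-4, 0.05, 1.0, 0.01
pos = rng.uniform(0, 1, size=(N, 2)) * L
color = (rng.random(N) < 0.05).astype(int)          # 5% initial B particles
for _ in range(200):
    pos, color = step(pos, color, L, D, R, p, dt, rng)
print("Theta_B =", color.mean())
```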
We proceed by fixing the value of R and varying N in order to explore different situations: a) the continuum limit, d_m ≪ R; b) the effect of discreteness, d_m ≳ R. In this case, at variance with the front propagation properties discussed in Sect. V, we will see that the Kuramoto length does not play a fundamental role. The basic reason for this is the spatially random distribution of particles. We adopt periodic boundary conditions on a square domain of side L_x = L_y = 1; the reaction rate is set to p = 1; and averages are numerically computed over a large number of noise realizations. Since particles are well premixed, the process is spatially homogeneous, and we may assume that the growth rate of Θ_B(t) (see Eq. 7) is dΘ_B/dt = g(Θ_B). In the case of large particle density one expects that g(Θ_B) = pΘ_B(1 − Θ_B); therefore it is natural to assume that for finite N one can replace Eq. (9) with g(Θ_B) = p_R(N) Θ_B (1 − Θ_B), where p_R(N) is a renormalized reaction rate of the discrete particle model. In this way the evolution of Θ_B is given by an FKPP equation with a renormalized (R- and N-dependent) reaction probability, dΘ_B/dt = Θ_B (1 − Θ_B)/τ_R(N), where τ_R(N) = 1/p_R(N) is the renormalized reaction time of the system. Note that p_R(N) contains all of the dependence of our system on the interaction radius and the number of particles and, therefore, it is the proper quantity with which to study the influence of discreteness in the model. Figure 1 shows, for a given R, the function g(Θ_B) together with the approximation of Eq. (10). Looking instead at the evolution of Θ_B = E(N(B, t))/N and using Eq. (11), we obtain a value of p_R(N) which is, in principle, different from the one given by Eq. (10). However, the two values are rather close, and in the following we present results only for the latter, obtained from Eq. (11). As an example, in Fig. 2 we show Θ_B(t) versus time obtained from the numerical simulation of the particle model, together with the best fit using Eq. (11), from which a value of p_R(N) is extracted. As previously shown in Figure 1, for large N the value of p_R(N) goes to the continuum limit p. In Figure 3, where the renormalized reaction probability is plotted versus N, one sees that the continuum limit p_R(N) = p is, as expected, recovered with good accuracy for large N. This corresponds to d_m ≪ R, while for values of N such that d_m is comparable to or larger than R the continuous description becomes inaccurate. More important for our purpose is the behavior of p_R(N) versus R. With a fixed total number of particles, N, and a well premixed initial condition, we compute p_R(N) while varying the interaction radius R (see Fig. 4). We observe that in the continuum limit (d_m ≪ R) we recover p_R(N) = p. For small R, such that d_m > R, p_R(N) seems to reach a constant value, which is around 30% smaller than the FKPP one. A few words should be spent on the difference between the large-N limit (Figure 3) and the large-R limit (Figure 4). For the problem under discussion one recovers the continuum-limit behavior whenever d_m ≪ R, irrespective of the value of the Kuramoto length. For example, in Fig. 4 one has l_K ≈ 0.045, which is much smaller than the values of R for which the continuum limit holds. On the other hand, in the study of front properties we will see that the scenario is different and l_K can play a relevant role. We now discuss the dependence of the previous results on the chosen initial condition. In Figure 5 we compare p_R(N) in the premixed case and when particles are initially separated in space. Indeed, if the premixed particle condition is relaxed and the system is initially prepared with different spatial distributions of particles, the renormalized reaction probability changes significantly, since E(N_R(B, t)) strongly depends on the particle configuration (see the inset in Figure 5).
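The extraction of p_R(N) from the time evolution can be sketched as follows, with a synthetic Θ_B(t) curve standing in for the simulation output; the fitting function is simply the logistic solution of the renormalized equation.

import numpy as np
from scipy.optimize import curve_fit

# Logistic solution of dTheta_B/dt = p_R * Theta_B * (1 - Theta_B), used as the fit function.
def theta_logistic(t, p_R, theta0):
    e = np.exp(p_R * t)
    return theta0 * e / (1.0 + theta0 * (e - 1.0))

# Synthetic "measurement" standing in for the Theta_B(t) curve of the particle simulation.
t = np.linspace(0.0, 20.0, 200)
rng = np.random.default_rng(1)
theta_meas = theta_logistic(t, 0.8, 0.05) + 0.01 * rng.normal(size=t.size)

popt, _ = curve_fit(theta_logistic, t, theta_meas, p0=[1.0, 0.1])
p_R_fit, theta0_fit = popt
print(f"renormalized reaction rate p_R ~ {p_R_fit:.3f}")   # close to 0.8 for this synthetic data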
IV. POSSIBILITIES OF QUENCHING. Studies on the quenching phenomenon [16] show that in a continuous reaction-diffusion system, in the presence of an advecting velocity field and with a reaction term of ignition type, i.e. g(Θ) = 0 for Θ < Θ_c, then for a suitable size of the "hot" region there is no propagating front and the reaction quenches. On the contrary, in a continuous FKPP system (1) quenching phenomena do not appear [17]. Here we show that in a particle description of an FKPP system quenching can occur. Still considering premixed particles in a closed vessel, let us introduce the possibility that a stable particle (B) can turn into an unstable one (A). That is, beyond the autocatalytic reaction (3), we introduce a new reaction, B → A, where q is its rate. Therefore we have the following reaction rules: • if C_α(t) = 0 then C_α(t + ∆t) = 1 with probability P_AB = W_AB ∆t; • if C_α(t) = 1 then C_α(t + ∆t) = 0 with probability Q_BA = W_BA ∆t. W_AB is the same as in the previous section, while W_BA = q does not depend on the interaction radius R, since it is a single-particle property. The renormalized description of this model is given by dΘ_B/dt = p_R(N) Θ_B (1 − Θ_B) − q Θ_B, whose solution is Θ_B(t) = Θ_AS Θ_B(0) e^{Λt} / [Θ_AS + Θ_B(0)(e^{Λt} − 1)], with Λ = p_R(N) − q and Θ_AS = 1 − q/p_R(N). Two different scenarios now appear. If p_R(N) < q for all N, the reaction dies out. On the other hand, when p_R(N) > q we have a behavior similar to the case with q = 0. In Fig. 6 we show Θ_B vs t for different values of R. It is apparent that for large R the system behaves similarly to the case q = 0 (including the continuum limit for the long-time value of the concentration, 1 − q/p, for large R). However, for R small enough the concentration asymptotically vanishes, that is, we have a quenching phenomenon. In Fig. 7 we plot Λ vs R (obtained analogously to Fig. 4, i.e., by fitting the analytical solution to the numerical results). For large R we approach the continuum limit and Λ → p − q, while for small R we have quenching, corresponding to negative values of Λ. This is a relevant result, entirely due to the role of the interaction radius, R, which reflects the discrete character of the model in the quenching mechanism. Let us note that, at variance with the results in the previous section (which are just quantitative changes with respect to the continuous equation), now the discrete nature of the system is able to produce a feature (the quenching) which is absent in the continuum limit [17]. V. FRONT PROPERTIES In the previous sections we have studied the dynamics of interacting particle systems in a closed container. We now focus on a different configuration, corresponding to well-separated chemicals in an open domain, and we investigate evolution properties, such as front speed and thickness [18,19], in terms of the interaction radius. In this section we take L_y = 1, L_x = 5, with periodic boundary conditions in the y direction and rigid walls in the x direction. The burnt (type B) particles are initially concentrated in the leftmost part of the system, so that a propagating reaction front, from left to right, develops. The reaction term we use is just the autocatalytic one (3), i.e., q = 0. We separately study the front speed and the front thickness.
A. Front speed We define an instantaneous front position, x_f(t), from the particle configuration, and the corresponding front speed, v_f, as its rate of change, computed after a transient and before complete saturation is approached. We expect that, via the renormalized description of the FKPP equation, that is Eq. (1) with p replaced by p_R(N), the front speed of the particle model at varying R should be v_0 = 2√(D p_R(N)). We saw that in closed basins different initial conditions on the distribution of particles select different p_R(N)'s, see Figure 5. Therefore, for the study of front propagation the proper p_R(N) is the one computed in the case of an initially separated particle distribution (see the corresponding symbols in Figure 5). The numerical results, reported in Fig. 9, confirm our prediction at least for small R, i.e., the front speed behaves as in the FKPP case (Eq. (17)). However, the large discrepancy observed for large R cannot be explained by a simple difference in the initial particle distribution. This difference arises because the interaction is nonlocal on the scale of the front, and therefore the continuum FKPP limit does not hold. Indeed, in particle systems, when R ≳ l_K the interaction term establishes a connection between regions containing A particles and regions containing B particles that in the classical FKPP equation could not be connected. Therefore, when R ≥ l_K, it is not possible to obtain the continuum FKPP limit (1) even with an arbitrarily large number of particles. Figure 10 shows the front speed as a function of R for various N. One can observe that, for small R, the front speed approaches the FKPP value as N increases, while for large R the front speed does not depend on N and its value is definitely different from the FKPP value. A simple argument explains the behavior of v_f for large R. The front speed is proportional to the front length times the reaction rate; e.g., in the FKPP equation v_0 = 2√(Dp) ∝ p√(D/p) ∝ p l_K. When the interaction radius is greater than the Kuramoto length it is reasonable to expect that the front length becomes proportional to R, so that the front speed grows linearly with R, v_f ≃ α R, in agreement with the results shown in Figures 9, 10 and in Figure 11 for various p. In particular, in the inset of Figure 11 one can see the behaviour of α as a function of p: α ≃ a p, where a is a constant. This is not surprising because p is the continuum limit of the reaction rate, which is reached asymptotically by the particle system, i.e., p_R(N) → p for large R. B. Front thickness As a further confirmation of the previous results, we investigate the behaviour of the front thickness at varying R. Note that in the continuum limit there are many ways to compute the thickness of a propagating front [20]. In the particle case, however, it is not obvious how to define a front profile. We proceed by defining an averaged field that resembles the front shape. Essentially this is a histogram over particle positions. Fixing our attention on A particles, we define an averaged profile Θ̃(x, t), proportional to N_{x,∆x}(A, t), where N_{x,∆x}(A, t) counts the number of A particles whose x coordinate lies between x and x + ∆x. When the number of particles is large the value of ∆x could be taken arbitrarily small whereas, in general, ∆x has to be small, but at the same time large enough in order to avoid large fluctuations in N_{x,∆x}(A, t). We use a relatively small ∆x (a few d_m) and we average N_{x,∆x}(A, t) over many different realizations. The front shape of an FKPP system decays exponentially with the distance from the front position x_f(t), over a characteristic length l_A which defines the front thickness.
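For completeness, the snippet below sketches one way to measure the front observables from the particle positions. The particular definitions (burnt-mass front position, linear fit for the speed, exponential fit of the A-particle histogram for the thickness) are common choices assumed for the illustration and are not necessarily those used in the original analysis.

import numpy as np

# Illustrative extraction of front observables from particle positions.
def front_position(xB, rho, Ly):
    # position of an equivalent sharp front carrying the same burnt (B) mass
    return xB.size / (rho * Ly)

def front_speed(xf_values, times):
    # slope of x_f(t) after the transient (linear fit)
    return np.polyfit(times, xf_values, 1)[0]

def front_thickness(xA, x_lo, x_hi, dx, rho, Ly):
    # histogram of A particles in the front region, normalised to a concentration,
    # and exponential fit: the decay length of log(Theta_A) estimates l_A
    edges = np.arange(x_lo, x_hi, dx)
    counts, _ = np.histogram(xA, bins=edges)
    theta = counts / (rho * Ly * dx)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ok = theta > 0
    slope = np.polyfit(centers[ok], np.log(theta[ok]), 1)[0]
    return 1.0 / abs(slope)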
Figure 12 shows the exponential behaviour of the front profile and the fit obtained from Eq. (22). In particular, Eq. (22) works well in the central region of the front, i.e., where corrections due to the particle nature of the system are less important. Other measurements of the front profile provide similar results. In Figure 13 we plot the front thickness, l_A, computed for different values of R. Again, for R smaller than the Kuramoto length the front thickness is constant, while for values of R greater than l_K the front thickness behaves as l_A ∝ R. This result confirms the assumption of Eq. (19). The constant value reported in Figure 13 (the dashed line) is only an indicative value to show that for R < l_K the front thickness is constant; it is not the FKPP value of the front width. VI. SUMMARY AND CONCLUSIONS In this work we studied the effects of the discrete-particle character of an autocatalytic reacting system, described in terms of chemical dynamics where two types of Brownian particles interact when they are at a distance smaller than a certain radius R. We have shown that in a suitable continuum limit the system is equivalent to an FKPP model for the concentration of particles, and we have focused on the differences that arise when the conditions for this limit are not fulfilled. The continuum limit holds if some relations among the relevant length scales of the problem (the interaction radius, the mean distance between particles and the Kuramoto length) are satisfied. For well-premixed initial conditions of the particle distribution only the first two lengths play a role. However, for front propagation, which requires a separated initial distribution of particles, the Kuramoto length is also important. We have also considered the modified chemical dynamics A + B → 2B (with rate p) and B → A (with rate q), where, at variance with the continuum limit, one can have the possibility of quenching for small values of R. This is due to the particle nature of the model, which induces a qualitative difference with respect to the continuous description. Moreover, in the context of front propagation, we have shown that particular conditions exist such that, on increasing the particle density, the system reaches a continuum limit which is definitely different from the continuum FKPP limit. We conclude by noting that many biological systems are characterised by the two main ingredients of our work: a minimal distance for the interaction, and the small number of organisms involved [2,21]. We hope that our work helps to clarify some shortcomings arising when a macroscopic description is attempted. VII. ACKNOWLEDGMENTS We have benefited from a MEC-MIUR joint program (Italy-Spain Integral Actions). C.L. acknowledges support from FEDER and MEC (Spain), through project CONOCE2
Toxicity of Metals to a Freshwater Snail, Melanoides tuberculata Adult freshwater snails Melanoides tuberculata (Gastropod, Thiaridae) were exposed for a four-day period in laboratory conditions to a range of copper (Cu), cadmium (Cd), zinc (Zn), lead (Pb), nickel (Ni), iron (Fe), aluminium (Al), and manganese (Mn) concentrations. Mortality was assessed and median lethal times (LT50) and concentrations (LC50) were calculated. LT50 and LC50 increased with the decrease in mean exposure concentrations and times, respectively, for all metals. The LC50 values for the 96-hour exposures to Cu, Cd, Zn, Pb, Ni, Fe, Al, and Mn were 0.14, 1.49, 3.90, 6.82, 8.46, 8.49, 68.23, and 45.59 mg L−1, respectively. Cu was the most toxic metal to M. tuberculata, followed by Cd, Zn, Pb, Ni, Fe, Mn, and Al (Cu > Cd > Zn > Pb > Ni > Fe > Mn > Al). Metals bioconcentration in M. tuberculata increases with exposure to increasing concentrations and Cu has the highest accumulation (concentration factor) in the soft tissues. A comparison of LC50 values for metals for this species with those for other freshwater gastropods reveals that M. tuberculata is equally sensitive to metals. Introduction Metals are released from both natural sources and human activity. The impact of metals on the environment is an increasing problem worldwide. The impact of metals on aquatic ecosystems is still considered to be a major threat to organisms health due to their potential bioaccumulation and toxicity to many aquatic organisms. Although metals are usually considered as pollutants, it is important to recognize that they are natural substances. Zinc, for example, is an essential component of at least 150 enzymes; copper is essential for the normal function of cytochrome oxidase; iron is part of the haemoglobin in red blood cells; boron is required exclusively by plants [1]. Malaysia, as a developing country, is no exception and faces metals pollution caused especially by anthropogenic activities such as manufacturing, agriculture, sewage, and motor vehicle emissions [2][3][4][5]. Metals are nonbiodegradable. Unlike some organic pesticides, metals cannot be broken down into less harmful components. Managing metal contamination requires an understanding of the concentration dependence of toxicity. Dose-response relationships provide the basis for the assessment of hazards and risks presented by environmental chemicals. Toxicity testing is an essential tool for assessing the effect and fate of toxicants in aquatic ecosystems and has been widely used as a tool to identify suitable organisms as a bioindicator and to derive water quality standards for chemicals. There are many different ways in which toxicity can be measured, and most commonly the measure (end point) is death [1,6,7]. Metals research in Malaysia, especially using organisms as a bioindicator, is still scarce. Therefore, it is important to conduct studies with local organisms that can be used to gain data on metal toxicity, to determine the organism's sensitivity and to derive a permissible limit for Malaysian's water that can protect the local aquatic communities. The freshwater molluscs of the Malaysian region are common, and most extant species are relatively easy to collect. The snails are rich fauna, while bivalve are the second. More than 150 aquatic nonmarine mollusc species have been recorded from the Malaysian region. 
Melanoides tuberculata (Müller 1774) belongs to the class Gastropoda, with shells higher than wide (elongate), conical, and usually light brown in colour, and it is a cosmopolitan species [8]. M. tuberculata is a species of freshwater snail with an operculum, a parthenogenetic, aquatic gastropod mollusc in the family Thiaridae. The average shell length is about 20-27 mm, and this species is native to subtropical and tropical northern Africa and southern Asia (the Indo-Pacific region, Southern Asia, Arabia, and northern Australia), but it has established populations throughout the globe. The snail has an operculum that can protect it from desiccation, and it can remain viable for days on dry land [9]. It is a warm-climate species, prefers a temperature range of 18 to 32 °C, and is primarily a burrowing species that tends to be most active at night. This snail feeds primarily on algae (microalgae) and acts as an intermediate host for many digenetic trematodes. M. tuberculata is a viviparous, gonochoric species with polyploid strains that reproduces by apomictic parthenogenesis. Because meiosis usually does not occur, offspring are identical to their mother. Females can be recognized by their greenish coloured gonads, while males have reddish gonads. Under good conditions, females will produce fertilized eggs that are transferred to a brood pouch, where they remain until they hatch. M. tuberculata will begin reproducing at a size as small as 5 to 10 mm in length, and broods may contain over seventy offspring embryos, which develop in the mother [10][11][12]. Molluscs have long been regarded as promising bioindicator and biomonitoring subjects. They are abundant in many terrestrial and aquatic ecosystems, being easily available for collection. They are highly tolerant to many pollutants and exhibit high accumulations of them, particularly heavy metals [13,14]. Little information exists in the literature concerning the toxic effects of metals on this snail. So far, only a few studies have been reported on metal toxicity to M. tuberculata [15,16], and most of the studies were on the accumulation of metals [14,17,18]. Therefore, the purpose of this study was to determine the acute toxicity of eight metals (Cu, Cd, Zn, Pb, Ni, Fe, Al, and Mn) to the freshwater mollusc M. tuberculata and to examine the bioconcentration of these metals in the body after four days of exposure. Materials and Methods Snails M. tuberculata were collected from canals in the university in Bangi, Selangor, Malaysia. Identification of the species was based on Panha and Burch [8]. Prior to toxicity testing, the snails were acclimatized for one week under laboratory conditions (28-30 °C with 12 h light : 12 h darkness) in 50-L stocking tanks using dechlorinated tap water (filtered through several layers of sand and activated carbon; T.C. Sediment Filter (TK Multitrade, Seri Kembangan, Malaysia)) aerated through an air stone. During acclimation the snails were fed on lettuce. The snails were then exposed to a range of concentrations of each metal (Table 1). Metal solutions were prepared by dilution of a stock solution with dechlorinated tap water. A control with dechlorinated tap water only was also used. The tests were carried out under static conditions with renewal of the solution every two days. Control and metal-treated groups each consisted of two replicates of five randomly allocated snails in a 500 mL glass beaker containing 400 mL of the appropriate solution.
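For reference, the exposure design described above can be summarised compactly; the dictionary below merely restates conditions given in the text (the structure and key names are ours).

# Compact restatement of the acute-exposure design described above
# (values taken from the text; the structure and key names are ours).
test_design = {
    "species": "Melanoides tuberculata (adults)",
    "duration_days": 4,
    "regime": "static, with solution renewal every two days",
    "replicates_per_treatment": 2,
    "snails_per_replicate": 5,
    "vessel": "500 mL glass beaker with 400 mL of test solution",
    "diluent": "dechlorinated tap water",
    "control": "dechlorinated tap water only",
    "acclimation": "one week at 28-30 C, 12 h light : 12 h dark",
    "metals_tested": ["Cu", "Cd", "Zn", "Pb", "Ni", "Fe", "Al", "Mn"],
}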
No stress was observed for the snails, as indicated by 100% survival in the control water until the end of the study. A total of 10 animals per treatment/concentration were used in the experiment, and a total of 410 animals were employed in the investigation [42,43]. Samples of water for metal analysis, taken before and immediately after each solution renewal, were acidified to 1% with ARISTAR nitric acid (65%) (BDH Inc, VWR International Ltd., England) before metal analysis by flame or furnace Atomic Absorption Spectrophotometer (AAS, Perkin Elmer model AAnalyst 800, Massachusetts, USA), depending on the concentrations. During the toxicity test, the snails were not fed. The experiments were performed at a room temperature of 28-30 °C with a photoperiod of 12 h light : 12 h darkness, using fluorescent lights (334-376 lux). Water quality parameters (pH, conductivity, and dissolved oxygen) were measured every two days using portable meters (model Hydrolab Quanta, Hach, Loveland, USA), and water hardness samples were fixed with ARISTAR nitric acid and measured by flame atomic absorption spectrophotometer (AAS, Perkin Elmer model AAnalyst 800). Mortality was recorded every 3 to 4 hours for the first two days and then at 12 to 24 hour intervals throughout the rest of the test period. The criterion used to determine mortality was failure to respond to gentle physical stimulation. Death was further confirmed by placing the snail on a glass petri dish for a few minutes; if it did not show any movement, it was considered dead. Any dead animals were removed immediately. At the end of day four, the live snails were used to determine bioconcentration of the metals in the whole body (soft tissues) according to the concentrations used. The snails were cleaned with dechlorinated tap water and soaked in boiling water for approximately 3 min. Tissues of the molluscs were removed from the shell and rinsed with deionized water; each sample contained three replicates of three to five animals in a glass test tube (depending on how many live animals were left) and was oven-dried (80 °C) for at least 48 hours before being weighed [14]. Each replicate was digested (whole organism) in 1.0 mL ARISTAR nitric acid (65%) in a block thermostat (80 °C) for 2 hours. Upon cooling, 0.8 mL of hydrogen peroxide (30%) was added to the solutions. The test tubes were put back on the block thermostat for another 1 hour until the solutions became clear. The solutions were then made up to 25 mL with the addition of deionized water in 25 mL volumetric flasks. Efficiency of the digestion method was evaluated using mussel and lobster tissue reference materials (SRM 2976 and TORT-2, National Institute of Standards and Technology, Gaithersburg, USA and National Research Council Canada, Ottawa, Ontario, Canada). The median lethal times (LT50) and concentrations (LC50) for the snails exposed to metals were calculated using measured metal concentrations. FORTRAN programs based on the methods of Litchfield [44] and Litchfield and Wilcoxon [45] were used to compute the LT50 and LC50. Data were analyzed using time/response (TR) and concentration/response (CR) methods by plotting cumulative percentage mortality against time and concentration, respectively, on logarithmic-probit paper. Concentration factors (CFs) were calculated for whole animals as the ratio of the metal concentrations in the tissues to the metal concentrations measured in the water.
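The LT50/LC50 values in this study were computed with FORTRAN programs implementing the Litchfield and Litchfield-Wilcoxon methods. As a rough, modern stand-in (an assumption, not the authors' procedure), a probit regression of mortality on log10 concentration yields an LC50 estimate, and the concentration factor is a simple ratio; the dose-response numbers below are hypothetical.

import numpy as np
import statsmodels.api as sm

# Illustrative LC50 estimate from hypothetical dose-response data.
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8])          # mg/L (hypothetical)
n_exposed = np.array([10, 10, 10, 10, 10])
n_dead = np.array([1, 3, 6, 9, 10])

X = sm.add_constant(np.log10(conc))
endog = np.column_stack([n_dead, n_exposed - n_dead])  # successes, failures
fit = sm.GLM(endog, X, family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
b0, b1 = fit.params
lc50 = 10 ** (-b0 / b1)                                # concentration giving 50 % mortality
print(f"estimated 96-h LC50 ~ {lc50:.2f} mg/L")

def concentration_factor(c_tissue, c_water):
    # CF as defined above: tissue concentration divided by water concentration
    return c_tissue / c_water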
One hundred percent of control animals maintained in dechlorinated tap water survived throughout the experiment. The median lethal times (LT50) and concentrations (LC50) increased with a decrease in mean exposure concentrations and times, respectively, for all metals (Tables 1 and 2). However, the lethal threshold concentration could not be determined, since the toxicity curves (Figures 1 and 2) did not become asymptotic to the time axis within the test period. Figures 1 and 2 show that Cu was the most toxic metal to M. tuberculata, followed by Cd, Zn, Pb, Ni, Fe, Mn, and Al. Other studies show different trends of toxicity with different snails. According to Luoma and Rainbow [7], the rank order of toxicity of metals will vary between organisms. With Lymnaea luteola, Khangarot and Ray [28,30] showed that the order of toxicity was Cd > Ni > Zn; with Viviparus bengalensis, Gupta et al. [27] and Gadkari and Marathe [34] found that the order of toxicity was Zn > Cd > Pb > Ni; and with Juga plicifera, Nebeker et al. [20] found that Cu was more toxic than Ni. In the present study the 96 h-LC50 of Cu was 0.14 mg L−1; earlier work [15] and Mostafa et al. [16] reported 96 h-LC50 values of Cu to M. tuberculata of 0.2 and 3.6 mg L−1, respectively, both higher than in the present study. In comparison with other freshwater gastropods (Table 3), this study showed that in general the LC50s for M. tuberculata were lower than or similar to those of other freshwater snails. Direct comparisons of toxicity values obtained in this study with those in the literature were difficult because of differences in the characteristics (primarily water hardness, pH, and temperature) of the test waters. With similar water hardness (soft water) and using adult snails, Nebeker et al. [20] reported that the 96 h-LC50 of Cu for Fluminicola virens was 0.08 mg L−1 and that of Zn for Physa gyrina was 1.27 mg L−1, values lower than those in the present study. The toxicity reported by other studies (Table 3) differs from that reported in this study owing to the different species, ages, and sizes of the organisms as well as varied test methods (water quality and water hardness), as these can affect toxicity [46][47][48][49]. In the present study, the water hardness used was considered low (18.7 mg L−1 CaCO3), and the water was categorized as soft (<75 mg L−1 as CaCO3). In comparison with other taxa, M. tuberculata shows less sensitivity to metals. LC50s reported for other taxa from this laboratory, such as Crustacea (the prawn Macrobrachium lanchesteri [50] and the ostracod Stenocypris major [51]), fish (Rasbora sumatrana and Poecilia reticulata [52]), and Annelida (Nais elinguis [53]), were lower than the LC50 values of M. tuberculata in the present study. Von Der Ohe and Liess [54] showed that 13 taxa belonging to Crustacea were among the most sensitive to metal compounds and concluded that taxa belonging to Crustacea are similar to one another and to Daphnia magna in terms of sensitivity to organics and metals, and that molluscs have an average sensitivity to metals. Mitchell et al. [9] reported that the snail has a tightly sealing operculum that allows it to withstand desiccation and apparently also increases its tolerance to chemicals. Bioconcentration of Cu, Cd, Zn, Pb, Ni, Fe, Al, and Mn in surviving M. tuberculata increased with exposure concentration, consistent with the findings of Moolman et al. [18] on Cd and Zn accumulation by two freshwater gastropods (M. tuberculata and Helisoma duryi).
Hoang and Rand [55] showed that the whole-body Cu concentration of juvenile apple snails (Pomacea paludosa) was significantly correlated with soil and water Cu concentrations. In other experiments, Hoang et al. [56] showed that whole-body Cu concentrations of juvenile snails (P. paludosa) increased with exposure time and concentration and reached a plateau (saturation) after 14 days of exposure. These results are in agreement with Luoma and Rainbow [7], who state that the uptake of trace metals from solution by an aquatic organism is primarily concentration dependent: the higher the dissolved concentration of the trace metal, the higher the uptake of the metal from solution into the organism will be, until the uptake mechanism becomes saturated. The present study shows that in general the highest concentration factors (CFs) were noted for Cu (988), Pb (169), and Zn (132), and the lowest CF was for Al (0.07) (Figure 3). Similar results were reported by Lau et al. [14], who found that M. tuberculata collected from the wild (Sarawak River) accumulated higher amounts of Cu, Zn, and As in the soft tissues compared with other metals. Adewunmi et al. [17] showed that Cu, Pb, and Cd were the metals accumulated at the highest levels in tissues of freshwater snails in dams and rivers in southwest Nigeria, and that metal concentrations in the snails varied with the seasons, especially for Cu, which was higher in the dry season than in the rainy season. According to Luoma and Rainbow [7], the factors that affect the rate of uptake of metals also affect the toxicity of metals. This is in agreement with the results of the present study, which show that Cu, the most toxic metal to the snail, also has the highest CF in the soft tissues of M. tuberculata. In explaining the toxicity of Cu, Hoang and Rand [55] suggest that the potential toxicity of Cu carbonate to snails may be explained by the carbonate content in the snails. The carbonate requirement of snails is greater than that of fish because snails require it for shell development. Copper may enter snails as Cu carbonate. After entering snails, Cu carbonate may be dissociated through biological and chemical reactions. Carbonate would be available for shell development, and Cu would be accumulated in soft tissue. Hoang et al. [56] also reported that, in the juvenile apple snail (Pomacea paludosa), most of the accumulated Cu was located in soft tissue (about 60% in the viscera and 40% in the foot) and the shell contained <4% of the total accumulated copper. However, a comparison of the uptake rates in aquatic organisms showed that in general the order of the uptake rate constant is Ag > Zn > Cd > Cu > Co > Cr > Se [7]. This discrepancy is probably due to the short time of exposure (four days) to metals in this study. Other factors which may influence the bioaccumulation of heavy metals in aquatic organisms have been suggested, such as feeding habit [57], growth rate and age of the organism [14,58], and the bioavailability of the metals, which greatly depends on the hardness, pH, and acid-volatile sulphide content of the water [59]. Hoang and Rand [55] showed that apple snails (Pomacea paludosa) accumulated more Cu from soil-water than from water-only treatments, and this suggests that apple snails accumulate Cu from soil (-sediment)/water systems.
Organisms with higher growth rates also usually have lower metal concentrations in their bodies as the rate of increase in the weight of its tissue and shell will be higher than the accumulated metals [14]. According to Lau et al. [14], the shell of M. tuberculata would be most suitable for monitoring Cu in the aquatic environment, which has an approximately thirtyfold magnification capability and with standard errors of less than 10%. Zn would be best monitored by using the shell of M. tuberculata, whose magnification capability was approximately 35 times and its error was at approximately 15%. Both tissue and shell of M. tuberculata could also be used for monitoring arsenic as it has good magnification capabilities with moderate irregularity approximately 23%. However, it is important to note that the Lau et al. [14] study was conducted in the field (longterm exposure), while the present study was conducted in the laboratory with short-term exposure, and differences in accumulation trend and strategies (higher accumulation in soft tissues or shell) may exist. Aquatic molluscs possess very diverse strategies in the handling and storage of accumulated metals, which include being in the forms of metal-rich granules metallothioneins (MT) or metallothionein-like proteins [60][61][62]. Accumulation strategies of invertebrates vary intraspecifically between metals and interspecifically for the same metal in closely related organisms [62,63]. Moolman et al. [18] showed that M. tuberculata had a much higher uptake of Zn in the Zn and in the mixed Cd/Zn exposures compared to Helisoma duryi, and Zn was readily accumulated with increasing metal concentrations. Lau et al. [14] also demonstrated that Zn concentrations in M. tuberculata were significantly higher than those in the molluscs Brotia costula and Clithon sp. The present study shows that the CF of Zn was higher than the Cd in the soft tissues of M. tuberculata. With the juvenile apple snail, Hoang et al. [56] showed that the snails accumulated Cu during the exposure phase and eliminated Cu during the depuration phase. Metals accumulated in animals can be stored without excretion leading to high body concentrations (accumulators), or the metal levels in the body can be maintained at a low constant body concentration (regulators) by balancing the uptake with controlled rates of excretion [64]. Conclusions This study showed that M. tuberculata was equally sensitive to metals compared to other freshwater gastropods. Cu was the most toxic metal to M. tuberculata followed by Cd, Zn, Pb, Ni, Fe, Mn, and Al. A comparison of the bioconcentration of metals in soft tissues of M. tuberculata showed that among the eight metals studied; Cu, Pb, and Zn were the most accumulated and Al was least accumulated. M. tuberculata is widely distributed in urban and suburban areas which makes it easy to sample and very useful in ecotoxicology studies. This study indicates that M. tuberculata could be a potential bioindicator organism of metals pollution and in toxicity testing.
Determinants of the type of health care sought for symptoms of Acute respiratory infection in children: analysis of Ghana demographic and health surveys Background Globally, acute respiratory infection (ARI) is a leading cause of infant and childhood morbidity and mortality. Currently, it is estimated that 50 million cases of childhood ARI are untreated. In this study, we identified determinants of the type of treatment sought for symptoms of childhood acute respiratory infection (ARI), including non-treatment, amongst a nationally representative sample of children under five years in Ghana. Methods In total, 1 544 children were studied by a secondary analysis of pooled survey data from the 1993, 1998, 2003, 2008, and 2014 Ghana Demographic and Health Surveys (GDHS). Cross-tabulations, chi-square, multinomial logistic regression, and Bayesian hierarchical spatial logistic regression analyses were used to identify relationships between the type of treatment sought and maternal socio-economic and household characteristics. Results Seeking medical care was significantly associated with child age (RRR= 1.928, 95 % CI 1.276 – 2.915), maternal employment status (RRR = 1.815, 95 % CI 1.202 – 2.740), maternal health insurance status, (RRR = 2.618, 95 % CI 1.801 – 3.989), children belonging to middle (RRR = 2.186, 95 % CI 1.473 – 3.243), richer (RRR = 1.908, 95 % CI 1.145 – 3.180) and richest households (RRR = 2.456, 95 % CI 1.363 – 4.424) and the 1998 survey period (RRR = 0.426, 95 % CI 0.240 – 7.58). Seeking self-care or visiting a traditional healer was significantly associated with maternal educational status (RRR = 0.000, 95 % CI 0.000 – 0.000), and the 1998 (RRR= 0.330, 95 % CI 0.142 – 0.765), 2003 (RRR= 0.195, 95 % CI 0.071 – 0.535), 2008 (RRR= 0.216, 95 % CI 0.068 – 0.685) and 2014 (RRR= 0.230, 95 % CI 0.081 – 0.657) GDHS periods. The probability that the odds ratio of using medical care exceeded 1 was higher for mothers/caregivers in the Western, Ashanti, Upper West, and Volta regions. Conclusions Government policies that are aimed at encouraging medical care-seeking for children with ARI may yield positive results by focusing on improving maternal incomes, maternal NHIS enrolment, and maternal household characteristics. Improving maternal education could be a positive step towards addressing challenges with self-care or traditional healing amongst children with ARI. estimated that 80 % of all ARI-related deaths among children under 5 years of age occur in developing countries, making it a leading cause of infant mortality in these countries [3]. Symptoms of ARIs include respiratory rate greater than or equal to 70 breaths per minute, severe chest wall retractions, cough and inability to feed and drink [4]. Children are particularly vulnerable to ARIs, because of their inability to adequately protect themselves from the associated environmental risks, their relatively immature immune systems and physical development [4]. Children under five years are not usually able to seek health care themselves; they must rely on adults for this. This responsibility usually falls to the mother or other female caregivers in the family [5]. With differing sociocultural roles in the home, males are often regarded as breadwinners whereas females are seen as homemakers, which includes taking care of the children [5]. Securing child health depends on appropriate health care-seeking. 
Insight from existing studies indicates that such behaviour is associated with a variety of factors including maternal education, maternal age, household size, maternal ethnicity, and household socio-economic status [6][7][8][9]. Distance from health care facilities is known to play a key role. For instance, a study in rural Tanzania found that mothers who lived one or more kilometers from a health center were less likely to access health care [8]. Mothers were also more likely to seek health care for children aged 24 months or younger [8,10]. Some studies also suggest that mothers seldom seek health care for their child in the early stages of a child's illness [6,10]. This practice would have negative implications for child health. Some studies indicate that over 30 % of child deaths can be attributed to late care-seeking [11]. Similar findings have been reported in Ghana, where ARIs accounted for 20 % of all annual deaths among children under five years [12]. Recent studies have also shown an association between socio-demographic and cultural factors such as distance from facilities, income, ethnicity and household size and access to health insurance to be associated with maternal health care seeking in Ghana [13,14]. Another study also showed that child mortality and health-seeking behavior were a function of social factors such as maternal education, place of residence, and family income [15]. The healthcare system in Ghana can be considered to be pluralistic in that it is characterized by two parallel systems: the Traditional and the Orthodox (or medical) systems [16][17][18][19][20]. The traditional system is the oldest and most widely used care system due to its accessibility, acceptability, affordability, and availability [16,17]. It includes the use of indigenous knowledge, spiritual therapies, herbal medicines, manual techniques, and in some cases, modern medical equipment to diagnose and treat ailments [20][21][22]. Traditional medical practice in Ghana is regulated by the Traditional Medical Practice Council which gains its mandate from the Traditional Medical Practice Act 575, 2000 [23]. Despite the efforts by the council, unlicensed practitioners and others who practice in secrecy hinder the proper regulation of traditional medical practice [24]. The use of traditional medicine is more widespread in rural rather than urban areas, partially due to the skewed availability of modern health care facilities [25,26] but mainly due to cultural norms, the desire to be part of the healing process, perceived displeasure with the medicalization of western medicine, and perceptions on the quality of care [27]. Orthodox medicine, by contrast is characterized by the use of scientific methods and principles to arrive at a diagnoses and treatment of diseases [18,21,22]. In Ghana, adherents to orthodox medicine, seek care in health care facilities that are either private or publicly owned. These include hospitals, clinics, polyclinics, pharmacies, health centers, and Community-Based Health Planning and Services (CHPS) compounds. The inequitable spatial distribution of personnel and resources for orthodox medicine and corresponding access to these facilities has been reported [20]. A study in 2011 suggested that, overall, whereas the ratio of traditional medical practitioners to human population stood at 1:200, orthodox doctor-population ratio stood at 1: 20,000 [28]. Studies on maternal health care seeking for childhood ARI, especially in the Ghanaian context, are very limited. 
Existing studies have mainly focused on the determinants of ARI among children, with limited or no focus on careseeking behavior among the mothers [29][30][31]. The findings of these studies show that maternal and household socioeconomic factors are significantly associated with the ARI symptoms among children under age five. There is also a limited knowledge on the temporal trend in careseeking for ARI, given that policies such as the Child Health Policy 2007-2015, Community-based Health Planning and Services (CHPS) policy, and Free Maternal Health Policy, have been introduced in the healthcare sector in the last three decades [30][31][32][33]. To the best of our knowledge, studies on maternal health care seeking for childhood ARI have not considered a recent nationally representative survey of children in Ghana. We, therefore, contribute to this growing body of literature by reporting on the determinants of care-seeking for childhood ARI using data from five successive National Demographic and Health Surveys conducted in Ghana. Knowledge of the determinants would be useful for planning interventions that could help improve health care-seeking and ultimately secure child health. This is most important, given that ARIs are one of the leading causes of morbidity and mortality among children under five years in Ghana and internationally [1,3,12]. Additionally, such information would assist in making policy decisions on the attainment of Sustainable Development Goal 3; Good health and well-being. The objectives of the study are two-fold. First, the study aims to assess the association between socioeconomic factors and the type of treatment sought for ARI. Secondly, the study explores the specific effect of place of residence on seeking medical care for childhood ARI symptoms using a Bayesian hierarchical spatial logistic regression. Data and methods The study used data from the following Ghana Demographic and Health Survey (GDHS) over the years: 1993, 1998, 2003, 2008, and 2014 [34-38]. The GDHS is a nationally representative survey of women aged 15 to 49 years and men aged 15 -59 years. Its main objective is to capture information on fertility, maternal and child health as well as family planning and attitudes towards HIV/AIDS and other sexually transmitted infections [32]. Respondents for the surveys were selected through a two-stage sampling process. The design used 20 sampling strata from stratification of each of the ten administrative regions into urban and rural areas. The first stage of the sampling process involved selecting census enumeration areas (EAs) in each stratum. The probability of selection of each EA was proportional to the size of the EA -that is, the number of residential households in the EA. In the second stage, households were randomly selected within each EA, and all women (aged 15-49 years) who were members of the household or who had spent the night before the survey in the house were interviewed. The birth history and health information of children born to eligible women in the last five years before the survey were collected as part of the data. This information was kept in the child recode dataset -the data used for this study. Detailed descriptions of the GDHS surveys and the sampling methods are available in the final reports for the surveys [32][33][34][35][36]. The sample for this study was children under age five who were ill with acute respiratory infections (ARI). 
The pooled sample of children under age five with ARI symptoms drawn from the successive GDHS rounds used in this study totalled 1,544. Measures The outcome variable for this study was the type of treatment sought for children with ARI symptoms. In the GDHS, ARI among children was identified by asking mothers and caregivers of children under age five whether their children had been ill with a cough accompanied by short and rapid breathing in the two weeks before the survey. The mothers and caregivers were then asked if they had sought treatment for the ARI symptoms and where the treatment had been sought. For our outcome variable, children with ARI symptoms whose mothers or caregivers did not seek treatment were categorized into a single group and coded as "0 = No treatment sought". We operationally defined seeking medical care as a mother or caregiver seeking an expert opinion or treatment from a public or private hospital or clinic, outside the home, for a child showing symptoms of ARI. Therefore, children whose mothers or caregivers sought treatment in public health care facilities (such as a government hospital, government health center/clinic, government health post or CHPS) or at private health care facilities (e.g. private hospital, private clinic, private doctor, mobile clinic and maternity home), but excluding a pharmacy or drug store, were classified into a single group and coded as "1 = Sought medical care". Children whose mothers or caregivers sought other sources of treatment, such as 'pharmacy or drug store', 'traditional healer' and 'drug peddler', were classified and coded as "2 = Self-care or sought traditional treatment". The independent variables were child demographic characteristics, maternal socioeconomic status, household characteristics, and place of residence. The age and sex of the child were included as child demographic characteristics. Maternal socioeconomic and demographic characteristics were age, marital status, religion, education, employment, and health insurance coverage. Maternal education was grouped into four categories and coded as: "0 = No formal education", "1 = Primary education", "2 = Secondary or high school education", and "3 = Higher education". The responses for maternal employment status were derived from the GDHS question that asks respondents whether they had been employed during the 12 months prior to the survey. Respondents who indicated they had worked in the past year, were currently working, or were employed but on leave were grouped into a single category and coded as "1 = Employed", while those who indicated they had not worked in the past year were coded as "0 = Unemployed". Another maternal characteristic captured was health insurance status - that is, whether the mother or caregiver was covered by health insurance - with a binary variable coded as "0 = Uninsured" and "1 = Insured". Household characteristics used as possible predictors of the type of treatment sought for ARI symptoms were: household wealth index (a DHS construct based on asset ownership and housing characteristics of each household) and sex of household head. Households were classified as poorest, poorer, middle, richer, and richest under the household wealth index. Place of residence was also included as an independent variable; it was categorized and coded as "0 = urban" or "1 = rural". Finally, a variable capturing the GDHS periods or years was also included as an independent variable, with the 1993 GDHS as the reference category.
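The outcome coding described above can be sketched as follows; the column names and response labels are hypothetical placeholders rather than the actual DHS recode variable names.

import pandas as pd

# Sketch of the outcome coding described above (hypothetical column names and labels).
MEDICAL = {"government hospital", "government health centre/clinic",
           "government health post/CHPS", "private hospital", "private clinic",
           "private doctor", "mobile clinic", "maternity home"}
OTHER = {"pharmacy/drug store", "traditional healer", "drug peddler"}

def code_treatment(row):
    if not row["sought_any_treatment"]:
        return 0            # 0 = no treatment sought
    if row["source"] in MEDICAL:
        return 1            # 1 = sought medical care
    if row["source"] in OTHER:
        return 2            # 2 = self-care or sought traditional treatment
    return pd.NA

children = pd.DataFrame({
    "sought_any_treatment": [False, True, True],
    "source": [None, "government hospital", "traditional healer"],
})
children["treatment_type"] = children.apply(code_treatment, axis=1)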
Analysis Univariate, bivariate, and multivariate analyses were performed concerning the objectives of this study. We first conducted a cross-tabulation analysis to examine the distribution of the study sample characteristics by type of treatment sought for children with ARI symptoms. We performed a Pearson Chi-square test of independence to identify any association between the predictor variables and the type of treatment sought for children with ARI symptoms; while the Cramer's V test was used to determine the strength of the association. Next, we conducted a multinomial logistic regression analysis to examine the effect of maternal socioeconomic status and household characteristics on the type of treatment sought for children with ARI symptoms among children under 5 years of age. The outcome variable of this study was a nominal variable with three categories hence the choice of multinomial logistic regression modeling. To facilitate easy interpretation of the result, we report the relative risk ratio (RRR) of the multinomial logistic regression model -which is the ratio of the probability of choosing medical treatment and self-care or using a traditional healer over the probability of not seeking treatment (the baseline category). The cross-tabulation and multinomial logistic regression were conducted using STATA version 16 [37]. We controlled for the survey design effects using the 'svyset' command in STATA to adjust for the sampling clusters and weights. We also estimated the design effect to provide insight into the efficiency of the sample used in this study. The first design effect -Deff -is a ratio of the variance estimate from our sample and the variance estimate from a hypothetical sample of the same size drawn as simple random sampling (SRS). A Deff value greater than 1 implies the study sample is more efficient than SRS. The second design effect estimate -Deft -is the ratio of the standard errors in the study sample and in the SRS. The final analysis was a Bayesian hierarchical spatial logistic regression to account for potential spatial dependence among the administrative regions used in the sampling process. The sampling method used in the GDHS means the data from the surveys are geographically distributed data. Due to the sampling approach, traditional regression models such as the multinomial logistic regression model used in the previous analysis do not account for potential spatial dependence. The concept of spatial dependency is based on Tobler's First Law of Geography, which states, "everything is related to everything else, but near things are more related than distant things" [38]. This concept violates the independently distributed observations and error assumptions of regression models. To address this issue, we included a spatial random effect term in our Bayesian model to account for potential spatial dependency between the administrative regions. In the modeling, we assume a Besag-York-Mollie (BYM) specification. BYM specification proposed by accounts for both smoothened spatial structure of the data based on the concept of spatial dependency (spatial autocorrelation) and an unstructured spatial effect. The Unstructured spatial effect is based on the assumptions that the effect of the spatial units or administrative regions may be independent of neighboring units or regions -independent region-specific noise (unstructured spatial effect) [39]. The BYM modeling specification, thus addresses the issue of potential bias where there is no spatial dependency [39]. 
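To illustrate the structure of the multinomial model (the study itself used Stata's svy-adjusted estimation and R-INLA for the spatial model), the following minimal sketch fits a multinomial logit of the three-category outcome on a few covariates and exponentiates the coefficients to obtain relative risk ratios. It uses synthetic data and ignores the survey clusters and weights, so it is a structural illustration only.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Multinomial logit of treatment type (0/1/2) on selected covariates; baseline
# category is 0 = no treatment, as in the study. Survey design effects are ignored here.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treatment_type": rng.integers(0, 3, n),          # synthetic outcome
    "child_age": rng.integers(0, 5, n),
    "mother_employed": rng.integers(0, 2, n),
    "mother_insured": rng.integers(0, 2, n),
})
X = sm.add_constant(df[["child_age", "mother_employed", "mother_insured"]])
fit = sm.MNLogit(df["treatment_type"], X).fit(disp=False)
rrr = np.exp(fit.params)      # relative risk ratios vs. the "no treatment" baseline
print(rrr)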
For the Bayesian hierarchical spatial logistic regression, we created a dummy variable out of the original outcome variable and coded it as "1 = sought medical treatment" and "0 = did not seek treatment, or self-care or sought a traditional healer". The Bayesian hierarchical spatial logistic regression was implemented in the open-access R software [40] using the R-INLA package [39][40][41][42][43]. We visualized the results for the region-specific odds ratio of using medical treatment for ARI and the posterior probability that the odds ratio of using medical treatment for ARI symptoms exceeds one (the exceedance probability) in R using the 'tmap' and 'tmaptools' packages [41]. Descriptive characteristics Over 50 % of mothers were less than 30 years old. Moreover, 75 % of mothers were uninsured under the NHIS, with most living in male-headed households (74.81 %). Over 50 % of children were 2 years of age or younger. The majority of children with ARI symptoms also lived in a male-headed household (74.81 %). Among the different regions, the Northern region contributed the most cases of ARI symptoms, accounting for 15.54 % of total cases. Table 2 presents the results of the cross-tabulation analysis, Chi-square test, and Cramer's V test. The chi-square test of independence shows that region of residence, rural/urban residence, the maternal characteristics (age, religion, education, employment, health insurance coverage), household wealth, and GDHS period were significantly associated with the type of treatment sought for ARI symptoms (p < 0.05). The Cramer's V test reveals that maternal health insurance status was moderately associated with the type of treatment sought for children's ARI symptoms (V ≥ 0.20). The remaining independent variables were weakly associated with the type of treatment sought for children's ARI symptoms (V < 0.20). The cross-tabulation distribution shows that the Volta region had the highest proportion (71.18 %) of mothers who did not seek any form of treatment for children with ARI symptoms. On the other hand, the regions with the highest proportion of mothers seeking medical treatment for children with ARI symptoms were the Upper West (48.75 %) and Upper East (45.38 %) regions. The results also show that mothers did not seek care for approximately 62 % of children. Table 3, for the multinomial logistic regression model, shows that child age, maternal employment status, maternal health insurance status, household wealth, and GDHS period are significantly associated with choosing medical care over not seeking treatment. Only maternal education and GDHS period were significant predictors of choosing self-care or seeking a traditional healer over not seeking treatment for ARI symptoms among children aged under 5 years. Mothers of children aged 1 year were 93 % (RRR = 1.926, p < 0.01) more likely to use medical treatment for ARI symptoms over not seeking treatment. The relative risk ratio for choosing medical care over not seeking treatment was 1.815 times (p < 0.01) greater among employed mothers of children with ARI symptoms than among unemployed mothers, adjusting for other variables in the model. Likewise, insured mothers of children with ARI symptoms had a greater relative risk ratio for choosing medical care over not seeking treatment (RRR = 2.681, p < 0.001). Children with ARI symptoms living in middle, richer, and richest households had a greater relative risk ratio of using medical care compared to those living in the poorest households.
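For readers less familiar with relative risk ratios, the percentage statements above follow directly from the RRR; for example:

\[
\mathrm{RRR} = 1.926 \;\Longrightarrow\; (\mathrm{RRR} - 1)\times 100\% \approx 93\%,
\]

i.e. the relative risk of choosing medical care over not seeking treatment is about 93 % higher for mothers of one-year-old children than for those in the reference age group.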
The relative risk ratio for choosing medical care was 0.426 times (p < 0.01) lower among children in the 1998 GDHS, compared to those in the 1993 GDHS. The relative risk ratio for self-care or seeking a traditional healer over not seeking treatment was greater among mothers with higher education relative to that among uneducated mothers. However, the magnitude was less negligible (RRR=0.000, p < 0.000). The relative risk ratio for using self-care or seeking a traditional healer was lower among children in all the GDHS survey periods compared to the reference GDHS year (1993 GDHS). Except for participants who did not belong to any religious denomination and participants from the 2014 GDHS, the design effect estimates (DEFF) show that the sample for all response categories was more efficient than SRS. Table 4 displays the results of the Bayesian hierarchical spatial logistic regression model. The results indicate mothers of children aged 1 year were more likely (MOR=1.790) to seek medical care for ARI symptoms, compared to those with children aged 4 years. Mothers practiced traditional religion were less likely to seek medical care for children with ARI symptoms, compared to Christian mothers. Children of employed and insured mothers, as well as those living in non-poorest households, were more likely to use medical care compared to not seeking treatment or self-care or using traditional treatment. The model also shows that 1 % of the variability in seeking medical over not seeking treatment or >self-care or using traditional treatment could be attributed to the mother's region of residence. Figure 1 shows the result of the spatial effects or regionspecific odds ratio for using medical care for ARI symptoms among children and their probability of exceeding 1. The region-specific effect shows that mothers in the Western, Ashanti, Upper West, and Volta regions have stronger positive effects or increased odds ratio of using a medical treatment for ARI symptoms among children under age 5 years. The opposite effect was observed for mothers in the Upper East, Northern, Brong-Ahafo, Eastern, Central, and Greater Accra regions of the country. Likewise, the figure also reveals the probability that the odds ratio of using medical care for ARI symptoms exceeds 1 is higher for caregivers in the Western, Ashanti, Upper West, and Volta regions. Discussions We found that less than half (43 %) of mothers sought treatment for their children with the onset of ARI symptoms. The majority of mothers in our study-approximately 57 % -did not seek any form of treatment for children with ARI symptoms. Studies in Kenya, Egypt, Tanzania, and Ethiopia [7,8,10] found that some mothers visited public health centers when the illness of the children got worse. So our finding may be partly explained by some mothers/caregivers possibly having avoided seeking care unless their child's symptoms got worse. Delays in seeking health care may have also been due to an inability to pay the cost of such a visit. A study in Ecuador suggests that the lack of money for medicines was an obstacle to seeking timely health care amongst female caregivers [42]. In both multinomial and bayesian hierarchical logistic regression models, child age, maternal employment status, and maternal health insurance status were identified as determinants of health care seeking. The GDHS period was statistically significant in the multinomial model only. Maternal religion, was also statistically significant in the bayesian hierarchical model. 
Seeking medical care for ARI symptoms was significantly associated with child age, maternal employment status, household wealth, and national health insurance status. On the other hand, seeking self-care or a traditional healer was significantly associated with maternal education and the GDHS period. Compared to children aged 4 years, those aged 1 year or less were more likely to have received medical care. This may be partly explained by these mothers having possibly already taken their children to medical centers for post-natal clinics and immunizations during this period. This finding is consistent with those from a study in Tanzania, which found children aged 2 years and older to be less associated with medical careseeking but rather associated with receiving no care or receiving care at home [9]. The findings suggested that maternal religion played a key role in healthcare-seeking. Mothers who were adherents to traditional religion were significantly associated with a lesser likelihood of seeking medical care compared to their Christian counterparts. Over 70 % of mothers belonging to traditional religion did not seek any form of treatment with the onset of ARI symptoms. Maternal employment status had a positive relationship with seeking medical care. A mother was more likely to seek medical care for her child's ARI if she had been employed. Employment is crucial in this regard, as it means that mothers can earn income to support household expenditure. Access to health centers may involve traveling some distance thereby requiring some form of transportation. Incomes accrued through a mother's employment would, therefore, provide for the cost associated with accessing and using these services. This idea is supported by a study in Kenya, in which inadequate finances were associated with failure to seek health care outside the home [10]. Along similar lines, we also identified household wealth as a significant determinant of medical care. Mothers or caregivers of children living in households that belonged to the middle wealth quintile or higher were more likely to seek medical care, compared to others who belonging to lower quintiles. This finding is consistent with studies in Kenya, Tanzania, Ethiopia, Ecuador and Mongolia [6,8,42,43], where significant relationships were found between household wealth and health care-seeking. Maternal health insurance registration status was identified as a statistically significant determinant of health care-seeking. Mothers were more likely to seek medical care for ARI symptoms if they had registered with the National Health Insurance Scheme (NHIS). Given that our study population was largely rural (73 %) and largely uninsured (76 %), our findings emphasize the important that the NHIS plays in promoting maternal health careseeking amongst the rural population in Ghana. Another study has also suggested that mothers whose education was primary school or less and who were from poor households were significantly less likely to be among persons insured under the NHIS [14]. We found lower educational attainment to also be significantly associated with seeking self-care or traditional healing, whereas higher educational attainment was associated with seeking medical care. This finding corroborates earlier studies in Kenya, Tanzania and Ethiopia [6,8,9], which suggested that higher educational attainment may have accounted for increased knowledge of symptoms and seeking medical attention at a health facility outside the home. 
The GDHS period was also a significant determinant of the type of treatment sought [15,19,22]. We observed that mothers living in the Upper East, Northern, Brong-Ahafo, Eastern, Central, and Greater Accra regions had lower odds of seeking medical care for their children. Also, mothers in rural areas were less likely to seek medical care compared to their urban counterparts. A disparity in the spatial distribution of medical health facilities in urban areas relative to rural areas in Ghana may have been a contributory factor in this finding [14,19,25,28]. As suggested by a study in Kenya, government hospitals are where medical care is often received, but they may have been relatively far away from where people live and have been costly [6]. Additionally, rural mothers may have had to travel (walking or paying for transport) longer distances compared to their urban counterparts to seek health care. This is an important finding, given that our study population living in the southern parts of the country had a higher exceedance probability of using medical care relative to those in the northern parts.
Fig. 1 Spatial effects and the exceedance probability of using medical care. Generated with R software using the tmap (version 3.3-2) and tmaptools (version 3.1-1) packages.
Our study should be viewed in light of the following limitations. First, our operational definition of medical care excluded 'Pharmacy or drugstore', but some pharmacies or drug stores might, nevertheless, have provided medical care to our study population. Accordingly, our findings should be interpreted in that context. Second, mothers provided answers based on a two-week recall of the health status of their children, and some of their answers may have been subject to recall bias. Third, a longitudinal study design may have afforded greater insight into the contributions of determinants across the wet and dry seasons in Ghana. Fourth, the cross-sectional design affords a snapshot of maternal healthcare-seeking and does not afford an interpretation of causality. Fifth, the mother's proximity to health centers and the time taken to access health services were not measured quantitatively. Furthermore, our study does not capture the quantity and quality of health care infrastructure and health care delivery over time, as this was outside the scope of the study. Despite these limitations, our study has some strengths. It represents the first attempt to pull together GDHS data sets to assess determinants of health care-seeking for children with ARI symptoms. It therefore contributes to the growing body of literature on the determinants of health care-seeking for childhood ARI amongst mothers/caregivers. Conclusions Our study found that the type of health care sought for symptoms of childhood ARI was determined by maternal socioeconomic and household characteristics. We identified the child's age, maternal employment status, household wealth, maternal health insurance status and the GDHS period as significant determinants of choosing medical care. The choice of self-care or a traditional healer was significantly associated with maternal education status and the GDHS period. Our findings suggest that interventions aimed at improving maternal socioeconomic and household conditions may improve medical care-seeking for childhood ARI symptoms. 
Policies aimed at improving maternal employment opportunities, improving maternal education on the symptoms and management of childhood ARI and policies geared towards encouraging mothers to enrol in the National Health Insurance Scheme may also yield positive effects for medical healthcare-seeking. Findings from our Bayesian spatial model suggest that there is a need to reduce regional disparities in socioeconomic indicators and the distribution of health care resources. Future studies could shed more light on the roles that sociocultural, religious, and physical disability factors play in health care-seeking for childhood ARI symptoms. In general, the government and its stakeholders should strengthen efforts at improving the national socioeconomic and health systems to overcome the problem of acute respiratory infection in children.
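To make the modelling workflow above concrete, the following R sketch illustrates how a Bayesian hierarchical spatial logistic regression of the kind described in the Methods can be fitted with R-INLA, and how region-specific odds ratios and exceedance probabilities can be extracted and mapped with tmap. It is a minimal illustration under stated assumptions, not the authors' code: the data frame dhs, its variable names, the adjacency file ghana.adj, and the boundary object regions_sf are hypothetical placeholders.

```r
library(INLA)   # R-INLA, installed from the INLA repository
library(tmap)
library(sf)

# Assumed inputs (placeholders): 'dhs' has one row per child with ARI symptoms,
# 'medical' coded 1 = sought medical treatment, 0 = otherwise; 'region' is an
# integer index 1..n_regions matching the polygon order in 'regions_sf';
# 'ghana.adj' is a region adjacency graph file prepared beforehand.
n_regions <- length(unique(dhs$region))

form <- medical ~ child_age + mother_age + education + employed + insured +
  wealth + religion + residence + survey_year +
  f(region, model = "bym", graph = "ghana.adj")   # structured + unstructured region effects

fit <- inla(form, family = "binomial", data = dhs,
            control.compute = list(dic = TRUE, waic = TRUE))

# For model = "bym", the first n_regions entries of the random-effect marginals
# are the combined (spatial + iid) region effects on the log-odds scale.
reg_marg <- fit$marginals.random$region[1:n_regions]
regions_sf$or_region <- sapply(reg_marg, function(m) inla.emarginal(exp, m))   # region-specific OR
regions_sf$exceed    <- sapply(reg_marg, function(m) 1 - inla.pmarginal(0, m)) # P(OR > 1)

# Map the region-specific odds ratios and exceedance probabilities.
tm_shape(regions_sf) + tm_polygons(c("or_region", "exceed"))
```

The exceedance probability is computed directly from the posterior marginal of each region effect, mirroring the quantity mapped in Fig. 1.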
Long-Term Impact of Single Epilepsy Training on Knowledge, Attitude and Practices: Comparison of Trained and Untrained Rwandan Community Health Workers Objectives: To close the epilepsy treatment gap and reduce related stigma, eradication of misconceptions is importantIn 2014, Community Health Workers (CHWs) from Musanze (Northern Rwanda) were trained on different aspects of epilepsy. This study compared knowledge, attitude and practices (KAPs) towards epilepsy of trained CHWs 3 years after training, to untrained CHWs from Rwamagana (Eastern Rwanda). Methods: An epilepsy KAP questionnaire was administered to 96 trained and 103 untrained CHWs. Demographic and intergroup KAP differences were analysed by response frequencies. A multivariate analyses was performed based on desired and undesired response categories. Results: Epilepsy awareness was high in both groups, with better knowledge levels in trained CHWs. Negative attitudes were lowest in trained CHWs, yet 17% still reported misconceptions. Multivariate analysis demonstrated the impact of the training, irrespective of age, gender and educational level. Knowing someone with epilepsy significantly induced more desired attitudes. Conclusion: Despite demographic differences between trained and untrained CHWs, a single epilepsy training resulted in significant improvement of desired KAPs after 3 years. Nation-wide CHW training programs with focus on training-resistant items, e.g., attitudes, are recommended. INTRODUCTION Epilepsy is a common chronic neurological disorder affecting people of all ages. Up to 80% of the 70 million people living with epilepsy (PwE) in the world live in low-and middle-income countries [1]. In sub-Saharan Africa (SSA), epilepsy is associated with a high "triple" burden of disease. High prevalence is the first burden, often secondary to treatable or avoidable causes such as infectious diseases, limited perinatal care, and traumatic brain injury [2][3][4][5][6]. In a meta-analysis of SSA community-based door-to-door surveys the prevalence of active epilepsy is estimated at 9.0‰ and lifetime epilepsy at 16‰, which is much higher compared to High Income Countries or Latin America and Asia [7]. In Rwanda, prevalence of 49‰ reported in 2005 greatly exceeds the SSA prevalence and was recently re-confirmed by a door-todoor survey in three villages of Musanze district in the Northern province in 2017 [5]. A second burden is the epilepsy diagnosis gap and epilepsy treatment gap, due to a healthcare infrastructure with limited resources and inadequate access to antiepileptic drugs. Closing the epilepsy diagnosis gap is hampered by a limited number of neurologists, epilepsy-trained staff and lack of access to EEG investigations and imaging [8]. The epilepsy treatment gap in SSA is estimated at 68.5% (95% CI: 59.5-77.5%), double in rural compared with urban regions [9,10]. The epilepsy treatment gap in Rwanda in 2005 was 67.8% [5]. The third burden relates to various forms of social discrimination and stigma, even more important in vulnerable groups, such as women and children in remote areas. In several African communities the myths and beliefs regarding epilepsy impact the epilepsy treatment gap [11][12][13][14]. Misconceptions about epilepsy result in negative social, psychological and economic consequences, such as fear, humiliation, social and work exclusion [12,[15][16][17][18][19]. 
To overcome these barriers, CHWs may play a key role in mobilization of patients, referral to primary healthcare facilities and provide education. Engagement programs with CHWs in Rwanda have been critical in turning around the burden of malaria and maternal death, demonstrating CHW's undisputed value [20,21]. As CHWs are recruited amongst villagers, there is a need to confirm that misconceptions and beliefs have been altered and, more importantly, persist on the long term after training on epilepsy. The Rwandan Organisation Against Epilepsy (ROAE), a chapter of the International League Against Epilepsy (ILAE), engaged with Community Health Workers (CHW) to increase epilepsy diagnosis and referral at the grassroot community level by providing epilepsy training courses in the Northern Province. This article discusses difference in knowledge, attitude and perception (KAP) parameters in CHWs trained by the ROAE compared to CHW not having been exposed to an epilepsy training, at least 3 years after the initial training. Study Setup This cross-sectional study was conducted in a cluster of semiurban and rural villages in Musanze district, Northern Province, where the ROAE had performed training courses to CHWs. A cluster of villages in the eastern rovince from the Nyakariro sector, Rwamagana district, Eastern Province, was selected as a control area as no known epilepsy training or awareness programs had been conducted before by any nongovernmental organisation or mental health program In Rwanda, each village is covered by 3 or 4 CHWs, each having their respective assignments. The study was approved by the ethical committee of the University Hospital of Kigali, CHUK. Participants provided written informed consent. A financial compensation of 2000 RwFr (2.33USD, Feb 2018 exchange rate) was provided in line with the recommendation of the Rwandan Government [22]. Study Population and Exclusion Criteria We recruited epilepsy trained CHWs in the Northern Province (group A) and untrained CHWs from the Eastern Province (group B). CHWs in Group A were enrolled if they had attended the 2014 ROAE epilepsy training. Group B consisted of CHWs from the Nyakariro sector who had never attended an epilepsy training. CHWs were identified by their supervisor and invited to a meeting in their village. Participants were excluded if they were illiterate or had relocated since 2014. All were native Kinyarwanda speaking. Epilepsy Training In 2014, 2,472 CHWs in the Musanze District received, between June and November 2014, an epilepsy training by the ROAE, organised in collaboration with and under supervision of local health authorities. Training elements included symptoms of epilepsy, including video cases, causes, treatment, prognosis, social aspects, etc., (Table 1) The content had been adapted to the schooling level of CHWs, of whom some are illiterate. Questionnaire A questionnaire was developed by the team, based on different published data [3,23,24]. After selection of questions, it was translated into Kinyarwanda. The translation was adapted to address cultural and linguistic aspects and the final version was validated by a neurologist. The self-administered questionnaire contained 4 questions on demographics and 14 questions on epilepsy knowledge/awareness, attitude and practice. We anticipated in the untrained group that CHWs would be unfamiliar with epilepsy and allowed for every question the option "not familiar". 
The principal investigator and three nurses assisted during the questionnaire administration to CHWs in groups by village or by district. Sample Size and Data Collection This was an explorative study as long-term retention of KAPs by CHWs, to our knowledge, has not yet been assessed. We aimed for enrolment of 100 CHW in each group following a feasibility assessment that the project would involve nearly 30 villages for each group. Also, as CHWs might have relocated, we anticipated a difficult recruitment of previously trained CHWs. Data collection was performed in January 2018 and March 2018. Statistical Analysis Double data entry into Google Forms was completed by an independent data specialist and data were extracted to Microsoft Excel. Statistical analysis, using STATA 12 software, was descriptive. Intergroup differences were calculated using two tailed Z-test or t-test as appropriate for continuous variables and Chi 2 -comparison for categorical variables using Yates correction where applicable. To explore the effect of training on long-term KAP change in CHWs, multiple count regressions were performed using the R software. First, we transformed our results to "desired" and "undesired" responses by question and by subdimension score (knowledge, attitude and practice) as per Table 3. Then we computed our results by counting the number of desired versus undesired answers for each question, correcting for undesired answers in multiple entry fields. We compiled a subdimension score by summing up the number of questions with a desired response and generated an overall score by summing all sub scores. The response option "not familiar with epilepsy" was considered as undesirable across all subdomains. Lastly, we fitted the multiple count regressions for each subdimension. Missing answers were accounted for by controlling for the number of answered questions. We also performed a sensitivity analysis in which we considered missing values as undesired answers using the same model. Demographic Characteristics A total of 199 CHWs were recruited, with 99 group A CHWs from 33 villages in the Muhoza, Cyuve and Musanze sectors, and 100 group B CHWs from 38 villages in the Nyakiriro sector. Upon data analysis and reconciliation with training records from 2014, three group A CHWs had not attended any epilepsy training and therefore were analysed in group B as untrained ( Table 2). CHWs were predominantly female and the trained group had a significantly higher proportion of older participants. In line with a more frequently urban provenance in group A, in contrast to rural provenance in group B, we also observed higher schooling level in this group. Awareness of Epilepsy Most participants had heard of epilepsy, yet significantly different favouring the trained group, with only one CHW in the trained group A not able to recall epilepsy as a disease, 3 years after training ( Table 3). Of interest is that over 85% of untrained CHWs had witnessed a seizure indicating good familiarity with seizures or patients living with epilepsy (PwE). This rate was similar in both groups. Knowledge and Attitudes Towards Epilepsy Among CHWs Epilepsy knowledge was better in trained CHWs, providing more correct answers on the cause of epilepsy. Epilepsy was recognised by more than 80% in group A and 60% in group B as a brain disease, yet epilepsy as madness or spiritual possession was reported by nearly 1 in 6 of trained CHWs. 
Overall, trained CHWs differed in attitudes towards PwE compared to untrained CHWs, both on items related to personal avoidance and fear as well as negative stereotypes, work and role expectations (Table 4). Nearly 1 in 5 trained CHWs, however, considered epilepsy a possibly contagious disease. Varying response rates with more missing values were observed in group B. Between 1 and 20 untrained CHWs chose to respond "not familiar with epilepsy" to different questions. Interestingly, in the untrained group, response rates regarding attitudes were higher than on awareness or knowledge. Treatment Practices by CHWs Group A CHWs were more likely to send suspect cases of epilepsy to health centres for diagnosis and treatment (Table 3), with 99% of trained CHWs referring both relatives and unrelated persons to medical facilities. In both groups, there was also a tendency to recommend traditional healers or prayer, particularly when referring relatives. Training Effect and Multivariate Analysis The use of regression analysis allowed taking advantage of cross-sectional data by controlling for confounding factors such as gender, age, education and epilepsy awareness (knowing someone with epilepsy). The analysis demonstrated statistically significant effects of the training on awareness and knowledge, attitude and overall score, but not on treatment practice (Table 5). No control variables showed statistically significant effects, except for epilepsy awareness, which impacted the attitude dimension. In addition, previous exposure by knowing someone with epilepsy was associated with a better score on the attitude questions. Educational level showed a trend towards higher impact across all subdimensions for higher compared to lower educational levels, yet this effect was small and not significant. As a missing result could have been intentional and therefore could have been classified as an undesired answer, a sensitivity analysis was conducted considering missing answers as undesired. Results from our initial analysis were confirmed. DISCUSSION In Rwanda, CHWs are key grassroots contributors to public health in their communities and have been instrumental in improving perinatal care and in the combat against malaria. To change misconceptions and beliefs on epilepsy in the community as well as to increase detection and referral of possible PwE, CHWs were trained on epilepsy in a selected region in Rwanda. Three years after the initial single training session, we compared KAPs from trained and untrained CHWs, to assess training gaps and needs for repeat training programs. According to our multivariate analysis, the training effect is the only observed variable that explains the difference in results between trained and untrained CHWs in terms of knowledge, attitudes, practices and overall KAP score, despite significant demographic differences between our samples. However, we could not control for the effect of provenance (rural vs. semi-urban) because of the imbalance of subjects in our dataset (97% of the untrained subjects were living in rural villages). This imbalance between samples was unexpected, as the overall profiles of the Musanze and Rwamagana health districts are very similar, and despite a large recruitment area of more than 30 villages. Group A was predominantly from an urban setting, which may also explain the significant difference found on schooling. Educational level alone cannot sufficiently explain our results, as it accounted for only a small and non-significant effect on attitudes, practices and overall score. 
Interestingly, higher age may have negative contribution on desired KAPs which may be interpreted that KAPs are may be more difficult to change in older CHWs, yet this result requires cautious interpretation and further investigation. Epilepsy awareness was high in both groups, with only 1 in 9 untrained CHWs not having heard of epilepsy. In addition, knowing someone with epilepsy positively and significantly impacted attitudes towards epilepsy. Therefore, bringing PwE closer to CHWs may also be considered a strategy to decrease negative attitudes. Trained CHWs retained good knowledge and were able to detail main causes of epilepsy, signs and symptoms and treatment. Untrained CHWs listed often five or six symptoms observed during a seizure, which may in correspond to recognition of tonic-clonic seizures. The results of trained CHWs in the attitude towards epilepsy were encouraging with a significant difference compared to untrained CHWs on aspects of personal fear and social exclusion as well as work/role expectations. This lasting effect of the training course and understanding of the disease burden was remarkable. Yet, still one in 10 trained CHWs would exclude children with epilepsy from schooling. Moreover, up to 15% of trained and 20% of untrained CHWs considered epilepsy as a curse or madness. A single training did not eradicate some beliefs and likely that socio-cultural and traditional beliefs persisted [25,26]. Indeed, trained CHWs still reported epilepsy as possibly contagious in 17.7%, compared to untrained CHWs in 35.0%. This has been equally illustrated in other countries [11,19,25,26]. Epilepsy management choices were well retained in trained CHWs, but also up to 80% of untrained CHWs would refer to the health facilities. Interestingly, the referral of a relative in group A and B involved also more frequently traditional healer referral, with prayers as an important pathway, as illustrated in Uganda [27]. These referral patterns may prove very important for treatment seeking patterns in patients and require utmost attention [28]. In general, there was more uncertainty in untrained CHWs with more responding "not familiar with epilepsy" in epilepsy knowledge and management questions. Interestingly, for untrained CHWs, more responses on questions regarding attitudes were recorded than on knowledge. We hypothesize that in community's epilepsy as a disease is surrounded by socio-cultural beliefs and misconceptions, demonstrating the knowledge gap. Epilepsy Training Course Opportunities Addressing changing attitudes towards epilepsy will be important as some misconceptions persisted in a significant proportion of trained CHWs, e.g., nearly 1 in 5 trained CHW reported epilepsy still as a contagious disease. Future programs and training materials need to emphasize and elaborate on those identified training-resistant topics, mainly related to attitudes, personal vs. societal beliefs and cultural aspects. These future programs should equally address vulnerable persons living with non-communicable diseases in the local communities, including women and children. Considering CHWs are best placed to support referral of and adherence to treatment, including the use of anti-epileptic drugs in PwE, the removal of misconceptions is crucial [3,29]. Addressing an active engagement of CHW will also be important. In line with African culture, verbally transmitted information seems more impactful than written information [30]. CHWs are respected advocates and influencers in their local communities. 
It would be interesting to assess changes of KAPs on epilepsy or any condition in the lay population after a disease specific training program. This would, of course, require a scaling of testing at different time points with validated scales, sensitive to changes over time and to geographical and cultural differences. The contribution of CHW in reducing the epilepsy diagnosis gap and epilepsy treatment gap has been acknowledged in other SSA countries [31][32][33]. The ROAE training program in 2014 had two objectives: 1) increase the disease knowledge of CHW; and, 2) improve the referral of PwE to local healthcare centres ( Table 5). The impact of the CHW training in the referral of PwE to healthcare centres was sub-optimal. The number of PwE seen at the healthcare centres was much lower than anticipated on the reported prevalence of 49‰ for Rwanda [5]. The role and impact of CHW increases if properly equipped with adequate educational tools and the use of simple questions on screening for epilepsy proved efficient [29,30]. Although clear needs for repeat training programs were identified with persisting negative attitudes and knowledge gap, public health managers may consider initially to scale single training of CHWs on epilepsy nationwide and include more specific attitude, belief and behaviour training topics. Such single training at large scale may be more impactful than conducting repeat training programs in lower numbers of CHWs. In addition, we believe that future epilepsy training projects should be expanded towards other influencers in their communities, such as, traditional healers, schoolteacher and trainers at sports clubs. Indeed, inclusion of community influencers may increase the impact of epilepsy training course as they may be close to children and adolescents, representing a vulnerable population [12]. Also, traditional or faith healers may contribute the closure of the diagnosis gap and treatment gap. Healthcare seeking behaviour is complex and may be driven by erroneous beliefs, attitudes and limited social support [34]. In our study referral was geared to biomedical care although often recommendations for traditional healers would be made, even more so to relatives of the CHWs. Alternative care offered by traditional or faith healers, especially in rural areas, may delay the biomedical healthcare seeking behaviour. In acute paediatric conditions in Rwanda, the use of traditional healers was the most significant predictor of the delay [35]. An epilepsy training for these alternative healthcare providers, trusted and influencers in their communities, seems complementary and recommended to a CHW training [3]. Even more, involving all stakeholders across biomedical care, i.e., physicians, nurses and social workers, and CHWs, on one hand and traditional healers, on the other hand, may even increase impact by fostering mutual understanding, interaction, collaboration and creation of win-win contribution to holistic care [12,36]. In view of the demonstrated benefits of the CHWs epilepsy training and of their closeness and social acceptability to villagers, nationwide training programs with adequate tools, will further advance epilepsy care, reduce stigma, improve social integration at work and at schools in Rwanda, addressing an important burden given the high prevalence of epilepsy [5]. Study Limitations An important limitation relates to the absence of a pre-training and immediate post-training assessment of KAPs in the trained CHWs; hence, we could not assess a change vs. 
pretraining KAPs nor decay thereof over time. We recommend that future training programs use evaluation tools pre-and immediately post-training to track time sensitive changes of the impact of an intervention. Another limitation is the use of a non-validated questionnaire. The instrument was a Kinyarwanda translation based on selected elements from other questionnaires [3,23,24]. This may have induced unidentified gaps and untested items. Translation and validation of specific instruments in Kinyarwanda, such as the Stigma Scale in Epilepsy is recommended [3]. Third, we excluded illiterate CHWs from the study, which may have favored the training effect. On the other hand, education level did only have a non-significant trend towards more desired responses for higher educational levels. Conclusion Despite demographic differences between trained and untrained CHWs, our data suggest that even after 3 years, differences in KAPs were driven by a single epilepsy training. Further assessments of the impact of the training and the outcome for the PwE are recommended and easy to administer validated questionnaires should be applied, both pre-and post-training. A nation-wide training programs of CHWs with targeted information modules on epilepsy, focused on specific attitude, beliefs and practices, should be recommended. In addition to training, equipping CHWs with easy-to-use tools, such as screening questions and treatment guidance, could drive closure of the epilepsy diagnosis and treatment gap. Indeed, well-trained CHWs will contribute to change negative attitudes and beliefs toward epilepsy and promote positive behaviour toward PwE. With their support, PwE are likely to face less stigma and discrimination, to better access neurological care and to become better integrated in their communities. DATA AVAILABILITY STATEMENT Anonymized study data will be accessible upon request to the corresponding author and team approval, within a reasonable timeframe. ETHICS STATEMENT The study was approved by the ethical committee of the University Hospital of Kigali, CHUK, Kigali, Rwanda. The patients/participants provided their written informed consent to participate in this study. FUNDING The principal investigator (SM) received an unrestricted grant from UCB S.A. (Brussels, Belgium). The authors declare this study received funding from UCB S.A. (Brussels, Belgium), as part of the Corporate Societal Responsibility support provided to the neurology department of the CARAES neuropsychiatric hospital at Ndera, Kigali (Rwanda). The funder had the following involvement with the study: study design, data analysis and preparation of the manuscript.
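The count-regression strategy described in the Statistical Analysis section (modelling the number of desired responses per KAP subdimension while controlling for the number of answered questions) can be sketched in R as follows. The data below are simulated purely for illustration, and the variable names, Poisson specification, and simulated effect sizes are assumptions rather than the study's actual model or estimates.

```r
set.seed(1)

# Simulated stand-in for the CHW data (one row per CHW); not the study data.
n <- 199
chw <- data.frame(
  trained    = rbinom(n, 1, 0.5),          # 1 = attended the 2014 epilepsy training
  age        = round(runif(n, 20, 60)),
  female     = rbinom(n, 1, 0.8),
  knows_pwe  = rbinom(n, 1, 0.6),          # knows someone with epilepsy
  n_answered = sample(6:8, n, replace = TRUE)
)
# Illustrative outcome: count of "desired" attitude responses per CHW.
lambda <- exp(0.8 + 0.5 * chw$trained + 0.3 * chw$knows_pwe +
              0.05 * (chw$n_answered - 8))
chw$desired_attitude <- rpois(n, lambda)

# Poisson count regression of desired responses on training status,
# controlling for demographics and the number of answered questions.
fit_att <- glm(desired_attitude ~ trained + age + female + knows_pwe + n_answered,
               family = poisson(link = "log"), data = chw)
summary(fit_att)
exp(cbind(rate_ratio = coef(fit_att), confint.default(fit_att)))  # Wald 95% CIs

# Sensitivity analysis (as in the text): recompute the counts after treating
# missing answers as undesired, then refit the same model.
```

Exponentiated coefficients give rate ratios for desired responses, so the training effect, age, gender, education and knowing someone with epilepsy can be compared on a common scale, one model per subdimension and one for the overall score.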
The Critical Effect of Hydration on the Resonant Signatures of THz Biospectroscopy Here we present an original study of the effect of hydration on terahertz absorption signatures in biomolecules (lactose monohydrate and biotin) and bioparticles (Bacillus thuringiensis and Bacillus cereus spores). We observe a "red-shift" in center frequency with increasing hydration in all samples, consistent with Lorentzian-oscillator behavior. But the effect of hydration on linewidth is ambiguous, sometimes increasing the linewidth (consistent with Lorentzian behavior) and sometimes decreasing the linewidth. Terahertz (THz) radiation lies in the range 300-10000 GHz, which currently is beyond the upper-frequency limit of wireless-communications and radar systems worldwide. It has been investigated for a variety of applications such as radio astronomy [1], security imaging [2], materials characterization [3], and medical diagnostics [4]. And of course, these applications benefit from the continual evolution of solid-state electronics and electromagnetics, such as resonant-tunneling oscillators up to 1.2 THz [5], amplifiers up to 1.0 THz [6], and frequency multipliers and mixers up to roughly 2.0 THz [7]. Phenomenologically, millimeter-wave (MMW) and THz radiation oscillate with a period comparable to the ps-timescale of collective vibrations in biomolecules and bioparticles, and also to the characteristic orientation times of dipolar liquid-water molecules. Because of this, both are absorbed strongly by soft tissue of all sorts, but the collective interaction of molecules and particles can be more resonant with respect to excitation frequency. In principle, this makes it possible to distinguish the biomolecular excitation from water absorption, which generally causes heating. Heating in biomaterials is complicated and potentially confusing since it creates its own strong bioeffects. For many years THz spectroscopy has been touted as a complement to FTIR and microwave vapor-phase spectroscopies, providing information about vibrational resonances from the tertiary structure of biomolecules and bioparticles rather than their interatomic vibrations or rotations. The challenge in THz spectroscopy has been achieving the selectivity and specificity needed to distinguish these signatures from various forms of "clutter", such as standing waves in sample holders and scattering effects (e.g., speckle) in grainy materials. Another challenge is measuring biosignatures "in situ", meaning in hydrated media for biosamples. The strong absorption from liquid water makes spectroscopy virtually impossible for most living organisms in vivo, and even for most aqueous solutions. The lack of powerful THz sources (i.e., average output ≥ 1 mW) exacerbates the problem since traditional methods for clutter mitigation, such as the placement of attenuators in the signal path to suppress standing waves, tend to reduce the THz signal-to-noise ratio deleteriously. In this work we show that the hydration level for THz signatures is "critical", meaning that it must lie in a certain range for signatures to be readily measured. For polar molecules having a strong enough vibrational resonance, such as lactose monohydrate, the lower end of the range extends to zero and the upper end to roughly 20% hydration by mass. For non-polar molecules or bioparticles, such as Bacillus spores, the lower end of the range rises above zero to roughly 10% hydration, but the upper end remains at about 20%. 
In addition, hydration usually causes a reduction (i.e., "red shift") in the signature center frequency which will be described later in the document. Experimental Methods and Samples The spectroscopic experiments were carried out with a high-resolution frequencydomain, coherent (photomixing) spectrometer like those described in References [8] and [9]. The spectra were measured in stepped-frequency mode at 1.0 GHz steps and 0.1 s integration time. The typical background spectra and noise floor are shown in Fig. 1(a) where the tuning bandwidth is seen to be at least 1.7 THz, and the dynamic range is ≈80 dB at the low end of the scan (200 GHz) and ≈40 dB at the high end (1.9 THz). The frequency range is determined mostly by the limited temperature tuning of the DFB diode lasers used in the spectrometer. For the present experiments, the frequency resolution is determined by the 1.0 GHz steps, although the instantaneous THz tone is much narrower than this. For the present study we chose two biomolecular samples and two bioparticle samples, all known from previous experience to have strong signatures in the THz region and within the operating range of our photomixing spectrometer. The biomolecular samples were lactose monohydrate and biotin, each having the chemical compositions shown in Fig. 2(a) and (b), respectively. The former is a crystallized form of milk sugar (monoclinic structure), and the latter is crystallized vitamin B7 (rhombohedral structure). Both molecular crystals have built-in polarity, are hydrophylic, and appear macroscopically as white powders with submicron grains. The lactose and biotin samples were mounted in metallic rings and encapsulated with thin (~12 um) polycellulose windows to slow the rate of evaporation. Two identical samples were made for each biomolecular type and loaded to the same approximate level of hydration, starting at between 25% and 30% gravimetric (accurate to ≈0.1%). One of the two was left on a precision balance, and the other mounted in the THz spectrometer. Then THz spectra were taken roughly every 15 minutes, corresponding to a typical drop of hydration on the balance-sample of ≈2% per spectrum. The bioparticle samples were endospores of two Bacillus species: B. thuringiensis (Bt), subspecies kurstaki, and B. cereus (Bc). Bt is a common soil bacteria and insect pathogen [10][11]. Upon sporulation, this bacteria produces bipyramidal protein crystals, -endotoxins or Cry proteins which are toxic to a variety of insects but are not harmful to humans. While Cry toxins have been widely applied as a "green" pesticide, recent use introduces Cry proteins into transgenic crops, providing a more targeted approach to insect management. The endospore and crystal are simultaneously present after an environmental stress induces sporulation in the vegetative cell, but are typically released after cell lysis. The spores can endure harsh environments such as aridity, extreme cold and heat and some radioactivity. When nutrition, temperature and other environmental conditions become favorable, spores germinate, develop into rapidly growing vegetative cells and begin another life cycle [12]. More relevant to biosensing and the CBRNE underlying theme of this NATO Workshop, Bt and Bc are in the same genus as B. anthracis [13], the notorious "Anthrax" bio-toxin. 
So the identification of THz signatures belonging to Bt or Bc could provide impetus for an important application of THz spectroscopic sensing, but without the handling hazards of standard biosensing techniques. The Bacillus spores were cultured from isolated stock bacteria in 50 ml tryptic soy broth (TSB; Becton, Dickinson, Sparks, MD) at 30 C. Bacterial cultures were transferred to a test tube and centrifuged at 5000·g for 30 min. The liquid broth was removed, and then the bacteria were maintained at 4 o C in the sealed test tube. Just prior to measurement, a concentrated, 0.2 mL paste-like sample of Bt or Bc bacteria (without broth) was removed from the sealed tube, and spread evenly with a spatula over ≈ 2 cm diameter on the filter paper. The thickness of the resulting samples ranged from 200 to 600 um. Scanning electron micrographs (SEMs) for the Bt and Bc pastes are shown in Figs. 2(c) and (d), respectively, each showing a large fraction of spores as proven by the zoom-in of Fig. 2(c). Three samples of Bt were tested for the present study: sample Bt1 containing vegetative cells but negligible spores; sample Bt2 containing a mixture of vegetative cells and spores at low hydration; and sample Bt3 containing vegetative cells and spores at modest hydration. Also three samples of Bc were tested: sample Bc1 containing a mixture of vegetative spores at a hydration level comparable to Bt3; sample Bc2 the same as Bc1 but dried out thoroughly to have negligible hydration; and sample Bc3, the same as Bc2 but re-hydrated to a saturated level ("muddy" texture). Because of our lacking knowledge of the broth compositions and the degree of centrifuge concentration, the hydration levels are only qualitative and based on visual observation. For all samples the spectrometric procedure was as follows. First, the THz beam path was blocked completely to obtain the noise floor spectrum PN(). Then a blank sample holder was added to the beam path: an empty encapsulated ring for the biomolecular sample, or bare filter paper for the bioparticle sample. In both cases this yielded the background spectrum PB(). Then the real samples were added to the beam path to yield the sample spectrum PS(). Finally, the transmittance was calculated as T() = [PS()-PN()]/[PB()-PN()]. For the biomolecular samples, the time between successive scans corresponding to different hydration levels, was approximately 10 minutes. Biomolecules The experimental transmission results for lactose monohydrate and biotin are displayed in Fig. 3(a) and (b), respectively. The "dry" spectrum of lactose monohydrate is in good agreement with previous, high-resolution experiments and signature analysis, showing that the signature is well fit by a Lorentzian model [14]. In the dry state of Fig. 3(a), the signature center frequency is 530 GHz and the fullwidth at half-maximum (FWHM) is ≈27 GHz. But when the hydration increases above ≈10%, the center frequency "red-shifts" significantly. The maximum shift is ≈4.0 GHz at 26% hydration: well-resolved by the coherent frequency-domain (photomixing) spectrometer used in this study. For the biotin sample, the dry center frequency is ≈540 GHz with FWHM of ≈ 55 GHz -much broader than the lactose [14]. The maximum shift is ≈16 GHz at a hydration of 22%, and the shifted spectrum is significantly broadened compared to the "dry" spectrum -by approximately 50%. Upon further inspection, it appears that beyond 20% hydration in Fig. 
3(b), the signature may be transforming into a "doublet", and beyond 25% may be saturating at the noise floor of the instrument. This exemplifies a key problem with THz biospectroscopy, which is that the aqueous local environment is often critical. That is, water in moderate concentrations generally strengthens the THz signatures, particularly by inducing a polarity if one doesn't already exist or is weak. But at larger concentrations the water masks the THz signatures, at least through Debye absorption. This is evident in the lactose data of Fig. 3(a) at 29.0% hydration, and in the biotin data of Fig. 3(b) at 25.5%, where the transmittance spectra saturate at the signature-center frequency because the transmitted signal drops into the instrumental noise floor. Bioparticles The transmission spectra of samples Bt1, Bt2, and Bt3 are plotted in Figs. 4(a), (b), and (c), respectively. Sample Bt1, having negligible sporulation and low hydration, shows a relatively high transmission but no obvious signatures except for an anomalous one near 700 GHz. Sample Bt2, having significant sporulation and also low hydration, shows four distinct signatures centered at 966, 1026, 1098, and 1164 GHz. The last two coincide with well-known and strong water-vapor absorption lines [15], but the others do not. And because Bt2 contains spores while Bt1 does not, the 966 and 1026 GHz signatures are attributed to the spores [16] and labelled accordingly in Fig. 4(b); below we refer to them as the lower- and higher-frequency spore signatures. There are also two broader, weaker features centered around 480 and 660 GHz that are not near any water vapor lines, and are addressed further below. The transmission spectrum of sample Bt3 in Fig. 4(c) shows six distinct signatures centered at 556, 752, 955, 1015, 1098, and 1164 GHz. The last two are the same water lines as above, and the first two are also well-known but weaker water lines. The middle two are similar in shape and strength to the two spore signatures in sample Bt2, but both are red-shifted in center frequency by exactly 11.0 GHz. Given our effective resolution of 1.0 GHz, these cannot be attributed to instrumental drift or frequency error. So we associate them with the same spore-related signatures, and they are labeled as such in Fig. 4(c). The stationarity of the strong water-vapor lines at 1098 and 1164 GHz supports this assertion and exemplifies the utility of water-vapor lines as frequency markers. Two broader signatures are seen at 510 and 680 GHz. It is tempting to associate these with the 480 and 660 GHz signatures in Fig. 4(b). However, the shift in frequency of both (30 and 20 GHz, respectively) is much greater than for the spore signatures, and in the opposite sense (blue shift rather than red shift). They could possibly be associated with vegetative cell material, which is supported by the presence of a broad dip around 480 GHz and the anomalous feature at 700 GHz in Fig. 4(a) (for the cell-rich, spore-sparse sample Bt1). However, more research is necessary to strengthen this claim. The transmission spectra of samples Bc1, Bc2, and Bc3 are plotted in Fig. 5. Sample Bc1, known to have a significant fraction of spores, shows a predominant signature centered at 880 GHz. It is similar in shape and depth to the higher-frequency spore signature of samples Bt2 and Bt3, but more than 100 GHz lower in frequency. The low background transmission at frequencies well below 880 GHz is consistent with the broadband (Debye) absorption expected from the hydration, and is similar to that for the hydrated Bt samples in Figs. 4(b) and (c). 
When Bc1 was thoroughly dried to become Bc2, the signature completely disappears and the background transmission rises dramatically, similar to sample Bt1 in Fig. 4(a). Finally, when Bc2 was re-hydrated to levels even greater than Bc1 or Bt3, the resulting Bc3 transmission dropped to the lowest levels seen, and the signature seen in Bc1 did not reappear in any form. This suggests that Bc3 had too much hydration, likely much higher than the 25.5% where the biotin signature began to deform in Fig. 3(b). And it is strong evidence that these Bacillus spore signatures are critically dependent on hydration: with too little hydration there is no signature, as in Bc2, and with too much hydration there is no signature, as in Bc3. Analysis As in most forms of spectroscopy, there is often a wealth of physical information in the lineshape of absorption signatures. Lineshape analysis has already been carried out for the biomolecules [14,9] and bioparticles [16] studied here under dry or unknown-hydration conditions. In all cases, the line absorption coefficient α was found to be well fit by the famous Lorentzian function for dipole oscillators. The frequency dependence in the most general case is given as α(ω) = A/[(ω0^2 − ω^2)^2 + γ^2 ω^2] (1), where ω is the circular frequency of the incident radiation, ω0 is the natural circular oscillation frequency (in the absence of damping), γ is the relaxation frequency (i.e., damping constant), and A is a constant that depends on the density and effective charge of the dipoles, and possibly other factors depending on the type of oscillator. The peak absorption frequency ωr is defined implicitly through (dα/dω)|ωr = 0, which leads to ωr = [ω0^2 − γ^2/2]^(1/2) (2). This is monotonically decreasing with respect to γ, meaning that ωr is always "red-shifted" with respect to the natural frequency. This is a well-known and intuitive result from classical mechanics and electromagnetics. Another important prediction of the Lorentzian lineshape function is the linewidth, defined generally as the full-width at half-maximum (FWHM), or Δω. To find this, we solve for the two values of ω in (1) at which α drops by -3 dB from its peak value [determined by setting ω = ωr in (1)], which is αp = A[γ^2 ω0^2 − γ^4/4]^(-1). For γ << ω0, this becomes αp ≈ A(γω0)^(-2), and the -3 dB frequencies are found to be ωd ≈ ω0(1 ± γ/2ω0), so that Δω ≈ γ. Given that the Lorentzian model is known to be a good fit to the strongest biosignatures studied here, we can examine the data with a logical inquiry. The absorption peaks were found to red-shift with increasing hydration, which is also consistent with the Lorentzian model if the damping increases with hydration too. This is a reasonable physical assumption, and if true, then the linewidth should also increase with hydration as Δω ≈ γ. We will investigate this next for both the biomolecular and bioparticulate signatures. The center frequency in Fig. 6(a) always red-shifts, the biotin frequency dropping rapidly above ~10% hydration, and the lactose not changing significantly until ~20%. The linewidth is obtained by Lorentzian fitting, and the FWHM in linear frequency is Δf = Δω/2π = γ/2π. The results are plotted in Fig. 6(b), showing that the biotin line at zero hydration is almost twice as broad as the lactose line, and then broadens dramatically above ~10% hydration, qualitatively consistent with the red-shift in center frequency. In contrast, the lactose FWHM does not change significantly, even above the ~20% hydration where the center frequency drops. 
Biomolecules To take the analysis one step further, we can use the center frequency from Fig. 6(a) to calculate the Lorentzian linewidth using a rearrangement of Eq. (2): Δf = {2[ω0^2 − ωr^2]}^(1/2)/2π. This is plotted in Fig. 7 in comparison with the experimental Δf for lactose [7(a)] and biotin [7(b)], using f0 = ω0/2π = 530.4 GHz (from the dry lactose sample) and f0 = 541.3 GHz (from the dry biotin sample). Clearly, the agreement for lactose is not good, the Lorentz model predicting a significant change in linewidth above ~20% hydration that is not observed experimentally. For biotin, the agreement is better qualitatively, but the Lorentz model predicts a linewidth increasing much faster with hydration than the experiment does. Bioparticles For the bioparticles (Bacillus endospores), we cannot do such a precise analysis with the Lorentz model because we do not know ω0, the dry resonant frequency. As mentioned earlier, this is probably because hydration plays two roles with the endospores: (1) it induces polarity and (2) it increases damping. So with no hydration, no signatures occur in the non-polar endospores. Nevertheless, we can check for consistency with Lorentzian behavior through the experimental center frequency and linewidth for the two signature-displaying samples of Fig. 4: Bt2 and Bt3, where the hydration of Bt3 is higher than that of Bt2. Each sample displayed two distinct spore signatures (labeled in Fig. 4) whose center frequencies and FWHM are plotted in Figs. 8(a) and (b), respectively. Both center frequencies decrease by 11 GHz with the increased hydration, consistent with Lorentzian behavior if increased hydration creates increased damping. However, the linewidth behavior in Fig. 8(b) is ambiguous. The linewidth of one of the spore signatures increases with hydration, consistent qualitatively with Lorentzian behavior. But the linewidth of the other spore signature decreases with hydration, similar to lactose monohydrate in Fig. 6(b) but even more strongly opposed to the Lorentzian expectation. This is something we are investigating further through experiment and modeling. Conclusion We have presented the first known study of the effect of hydration on THz absorption signatures in biomolecules and bioparticles. We observe a "red-shift" in center frequency with increasing hydration in both biomolecules and bioparticles, consistent with Lorentzian-oscillator behavior. But the effect of hydration on linewidth is ambiguous, sometimes increasing the linewidth (consistent with Lorentzian behavior) and sometimes decreasing the linewidth.
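As a numerical illustration of the Lorentzian relations used in the Analysis, the R sketch below evaluates Eq. (1), checks that the peak frequency and FWHM follow Eq. (2) and Δω ≈ γ, and computes the linewidth that the rearranged Eq. (2) implies for the roughly 4 GHz lactose red-shift quoted above. Only the dry lactose center frequency, its ≈27 GHz FWHM, and the ≈4 GHz shift are taken from the text; the grid and variable names are illustrative.

```r
# Lorentzian absorption lineshape, Eq. (1): alpha(w) = A / ((w0^2 - w^2)^2 + g^2 w^2)
lorentz <- function(f, f0, df_damp, A = 1) {
  w  <- 2 * pi * f        # circular frequency of incident radiation
  w0 <- 2 * pi * f0       # natural circular frequency
  g  <- 2 * pi * df_damp  # damping gamma, expressed via its linear-frequency value
  A / ((w0^2 - w^2)^2 + g^2 * w^2)
}

f0      <- 530.4e9                      # dry lactose natural frequency (Hz), from the text
df_damp <- 27e9                         # gamma/2pi, i.e. the ~27 GHz dry FWHM
f       <- seq(500e9, 560e9, by = 1e6)  # 1 MHz frequency grid
a       <- lorentz(f, f0, df_damp)

f_peak <- f[which.max(a)]                        # numerical peak frequency
fwhm   <- diff(range(f[a >= max(a) / 2]))        # numerical full-width at half-maximum
c(peak_GHz = f_peak / 1e9,
  eq2_GHz  = sqrt(f0^2 - df_damp^2 / 2) / 1e9,   # Eq. (2) expressed in linear frequency
  fwhm_GHz = fwhm / 1e9)                         # should be close to df_damp

# Linewidth implied by the observed red-shift via the rearranged Eq. (2):
f_r <- f0 - 4.0e9                                # ~4 GHz maximum lactose shift
sqrt(2 * (f0^2 - f_r^2)) / 1e9                   # predicted FWHM in GHz (~92 GHz)
```

The final value (on the order of 90 GHz) is far larger than the roughly constant ~30 GHz lactose FWHM in Fig. 6(b), which is the quantitative content of the statement that the Lorentz model over-predicts the linewidth change for lactose.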
Geophysical constraints on the reliability of solar and wind power in the United States We analyze 36 years of global, hourly weather data (1980–2015) to quantify the covariability of solar and wind resources as a function of time and location, over multi-decadal time scales and up to continental length scales. Assuming minimal excess generation, lossless transmission, and no other generation sources, the analysis indicates that wind-heavy or solar-heavy U.S.-scale power generation portfolios could in principle provide ≈80% of recent total annual U.S. electricity demand. However, to reliably meet 100% of total annual electricity demand, seasonal cycles and unpredictable weather events require several weeks' worth of energy storage and/or the installation of much more capacity of solar and wind power than is routinely necessary to meet peak demand. To obtain ≈80% reliability, solar-heavy wind/solar generation mixes require sufficient energy storage to overcome the daily solar cycle, whereas wind-heavy wind/solar generation mixes require continental-scale transmission to exploit the geographic diversity of wind. Policy and planning aimed at providing a reliable electricity supply must therefore rigorously consider constraints associated with the geophysical variability of the solar and wind resource, even over continental scales. Beyond 80%, the required amount of energy storage or excess solar/wind generating capacity needed to overcome seasonal and weather-driven variabilities increases rapidly. Today this would be very costly. The availability of relatively low cost, dispatchable, low-CO2-emission power would obviate the need for this extra solar and wind or energy storage capacity while meeting reliability requirements over a multi-decadal timescale. Many proposed low-carbon electricity systems assume primarily wind and solar energy to generate electricity, 1-14 but the variable and uncertain nature of these resources raises concerns about system reliability. The current North American Electric Reliability Corporation (NERC) reliability standard specifies a loss of load expectation of 0.1 days per year (99.97% reliability). 15 Hence, for energy systems that extensively use wind and solar generation, a statistically robust evaluation of long-term reliability, especially given the 40-50 year useful life of deployed energy infrastructure, requires evaluation of the co-variability of the solar and wind resource over a multidecadal time scale. Studies of wind energy for 1 year for 60 European stations; 16 1 year for 17 wind farms in Denmark; 17 5 years for 11 stations on the U.S. East Coast; 18 or 45 years for 117 stations across Canada, 13 have shown that after removal of diurnal cycles and seasonal trends, the variability of wind energy on hourly time scales decreases exponentially with distance, with a characteristic correlation length of 200-500 km. Analysis of 44 years of 6 hour-averaged wind energy data derived from a weather model yielded a correlation length of 400-600 km over Europe. 19 Anemometer data from nine tall-tower sites spanning the contiguous U.S. have been used to estimate the occurrence of low-wind-power events over large areas. 20 Solar and wind co-variability studies of relatively small geographic regions (e.g., the United Kingdom, California, the Iberian Peninsula) over timescales of months to a few years have indicated that the combination of solar and wind resources can decrease the variability intrinsic to either resource alone. 
2,12,21 Energy system models that include a range of generation sources and embedded assumptions about the relative costs of different generation technologies as well as transmission and storage have been constructed to obtain cost-optimized solutions for specific deployment scenarios that include various penetrations of carbon-free generation in the electricity mix. 1,5,8,9,22,23 Examples include an economic optimization of solar, wind and natural gas generator deployment along with grid construction using 3 years of solar and wind data over the contiguous US, 22 an assessment of US and global wind power availability using 5 years of reanalysis data, 24 and computationally expensive capacity planning and dispatch algorithms that restrict analysis to representative hours, days and weeks for the US 8,14,23 or Europe. 25 Electricity decarbonization scenarios, primarily consisting of solar and wind, have been presented over a range of areas spanning individual US states to whole continents. 3,4,10,11,13,[26][27][28][29][30] We use herein a simple, transparent model that is independent of detailed economic or technological assumptions to assess the geophysical constraints imposed by wind and solar resources over the continental U.S. "Reanalysis" products, which assimilate a wide array of observational data to deliver an internally consistent historical representation of weather, facilitate assessment of multi-decadal solar and wind resource (co-)variability. 26 Accordingly, we have used a reanalysis data set 31 covering 1980 to 2015 with hourly solar and wind resource data to evaluate the spatiotemporal variability of the solar and wind resources over the continental U.S. and the ability of these resources to satisfy U.S. electricity demand, instantaneously and in aggregate, as a function of (1) the amount and spatial extent of installed solar and wind capacity, and (2) various amounts and temporal qualities of energy storage, but with no other generation sources utilized. Our results thus describe the geophysically based variability of energy production from solar and wind resources and identify the constraints that this variability places on reliably meeting U.S. electricity demand in this limiting case. Further, we quantify the duration and magnitude of residual "gaps" when electricity generation fails to meet demand that are a consequence of the geophysical variability of these resources even over continental scales. The details of our analytic approach, and the sources of data, are described in the Methods section. First, global, hourly surface solar fluxes and wind speeds (50 m height) from a long-term (36 year) reanalysis data set (MERRA-2) were used to estimate the resources available each hour at a spatial resolution of 0.5° × 0.625°. Higher temporally and spatially resolved wind resource data do not span as long a time series and/or encompass comparably large geographic areas. 1,8,22,23,29,30 The MERRA-2 data set is therefore well-suited to investigate correlations of the wind resource and co-variability with the solar resource over long length and time scales, without emphasis on rapid variability on sub-hourly timescales. Second, hourly solar and wind power production profiles over areas of interest were calculated simply using an area-weighted average of each resource, as opposed to assuming specific cost-optimized locations or regions for either solar or wind energy production. 
The installed local nameplate generation capacity was estimated using representative real-world capacity factors. These separate power production profiles were then combined considering various mixes and amounts of solar and wind generation. Finally, to assess the resulting energy system reliability over the time period of interest, the energy produced was compared to the energy demanded in each hour, as a function of the assumed energy storage capacity. One year of actual U.S. electricity demand, from July 2015 to July 2016, was compared to all 36 years of resource data (see SOM for complete details). Perfect transmission and energy storage, with no losses or constraints, was assumed, yielding a best-case scenario for realizing the benefits of geographic anti-correlation of the resources and allowing isolation of the limitations associated purely with the geophysical characteristics of wind and solar energy resources. Specific transmission constraints,8,32 higher-resolution resource data,22 energy storage inefficiencies,7 optimization of the choice of generation locations to minimize their mutual correlation as opposed to maximization of local energy production, and operational limits and market dynamics, among other practical considerations, will play important roles in determining the details of system- and site-specific design and operation of an actual electricity system of this magnitude. Resource and demand variability Fig. 1a shows the seasonal variability of the solar and wind resources when aggregated over the contiguous U.S. (referred to as CONUS), normalized by the 36 year mean. For each day, the median and range across the 36 years is shown. The solar resource reaches a maximum during summer (June-July) that is 3.7 times the winter minimum (December-January; yellow curve in Fig. 1a). In contrast, the wind resource peaks in spring (March-April) and decreases by up to a factor of 3.5 during the summer months (July-August; purple curve in Fig. 1a). For comparison, recent electricity demand (July 2015-July 2016) is greatest during the summer (July-August) and decreases by a factor of 1.8 to its minimum in spring (March-early May; orange curve in Fig. 1a). As can be seen by the shading in Fig. 1a, the wind resource has much greater daily variability about its median than the solar resource, particularly in the winter and spring when the median daily wind resources are largest. The wind resource also exhibits greater variability than the solar resource on the time scale of days to weeks. The seasonal variability depicted in Fig. 1a indicates some fundamental trade-offs when electricity demand is to be met with large quantities of solar and wind generation aggregated over the CONUS. Combining solar and wind generation can alleviate some of the seasonal deficiencies of each, but the generation complementarity is limited by the large variability in the wind resource on a wide range of time scales, as well as by the substantial difference in the amplitude and seasonal cycles of the solar resource relative to either wind or electricity demand. Fig. 1b and c show the daily variability of the solar and wind resource and electricity demand during the summer (June, July, August) and winter (December, January, February), respectively. Unlike the solar resource, in aggregate, the wind resource and electricity demand have relatively small daily cycles and never reach zero (Fig. 1b and c).
However, at hourly time scales the wind resource is much more variable than either the solar resource or demand (Fig. 1b and c). Without energy storage, the extreme daily cycle of the solar resource also places a substantial constraint on reliability, requiring high ramp rates of additional generation in the morning and evening hours. Area-dependence and resource mix Throughout, we evaluate two levels of installed combined solar and wind capacity: installations sufficient to generate the total integrated electricity demanded in the U.S. July 2015-July 2016 according to mean annual resources over our 36 year record, and installations sufficient to generate 150% of total U.S. electricity over that period. We refer to these as 1x and 1.5x (read "1.5 times") generation, and use them to gauge the reliability benefits of deploying excess generation capacity. Similarly, we discuss energy storage in units of time of mean demand. For example, in the U.S., total mean power demand is 450 GW, so 12 h of storage equals 5.4 TW h of energy capacity. The temporal correlation of the wind resource decreases as a function of distance and declines more rapidly in the east-west (longitudinal) direction than in the north-south (meridional) direction.22 Generating wind energy over larger "aggregation areas" is most effective at reducing the hourly and daily energy production variability. Long distance, high-voltage transmission is a proposed technical approach to aggregate these temporally de-correlated resources.14,22 The temporal correlation of the solar resource for the contiguous U.S. decreases relatively weakly with distance (there are only four hourly time zones in the CONUS). Energy storage, demand management, and/or separate carbon-neutral, flexible generators are necessary to overcome the daily solar cycle (generation values >100% of peak capacity do not help when production is zero, i.e., during dark hours), while energy storage, demand management, separate carbon-neutral, flexible generators and/or installation of excess name-plate solar and wind capacity (generation of >1x) are needed to overcome the seasonal resource cycles. Fig. 2 shows the total annual electricity demand that can be met (i.e., reliability) by using solely solar and wind resources, as a function of the mix of solar and wind energy generation contribution (vertical axes in each panel) as well as the area over which the resources are aggregated (horizontal axes in each panel). Without storage and with 1x generation, the highest reliability is obtained from wind-heavy generation mixes (e.g., 60-75% wind), with an increase from <60% reliability when the resources are aggregated over city-sized areas (10^3-10^4 km2) up to >75% reliability when the resources are aggregated at CONUS scale (10^7 km2; Fig. 2a). The overall patterns are similar with 1.5x generation and no storage, but reliability increases by 10-12%; the effect on reliability is larger when the resource is aggregated over larger areas (Fig. 2c). Wind and solar resources could in principle provide an arbitrarily high percentage of electricity generation with high reliability (at some cost penalty) if the resulting intermittency were fully compensated by dispatchable power such as natural gas plants, pumped hydroelectricity, demand management, and/or, for example, rampable nuclear power.
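As a small illustration of the unit conventions just defined (generation expressed as a multiple of total demand, storage expressed in hours of mean demand), the sketch below converts between the two; it is not part of the original analysis, and the 450 GW mean demand figure is the one quoted above.

```python
# Illustrative sketch of the generation/storage unit conventions used in the text.
MEAN_DEMAND_GW = 450.0  # approximate U.S. mean power demand quoted above

def storage_energy_twh(hours_of_mean_demand: float) -> float:
    """Convert a storage size in hours of mean demand to an energy capacity in TW h."""
    return MEAN_DEMAND_GW * hours_of_mean_demand / 1000.0  # GW * h -> TW h

def fleet_average_power_gw(generation_multiple: float) -> float:
    """Average power the solar/wind fleet must supply for a given generation
    multiple (1x means total generation equals total demand)."""
    return MEAN_DEMAND_GW * generation_multiple

print(storage_energy_twh(12))        # -> 5.4 TW h, as stated in the text
print(fleet_average_power_gw(1.5))   # -> 675 GW of average output for 1.5x generation
```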
To focus on the reliability gaps associated with utilization solely of wind and solar resources, we consider herein systems that fill the resulting unmet demand solely with energy storage. Fig. 2b and d display the effects of energy storage equal to 12 hours of mean demand (e.g., batteries, pumped hydroelectric reservoirs, power-to-gas-to-power, compressed-air energy storage, thermal storage, etc.) on system reliability for various mixes of wind and solar generation. The availability of 12 hours of energy storage shifts the optimal generation mix towards a solar-heavy system; in this scenario, mixes comprising energy generation contributions of 70-75% from solar, with the balance from wind, produce the highest reliability (Fig. 2b and d). This change in the optimal generation mix results from storage smoothing out the daily solar cycle such that seasonality becomes the main limit on increasing reliability. In the absence of storage, no amount of solar capacity can overcome the daily cycle, hence supporting technologies are obviously needed to meet nighttime demand. The availability of storage also substantially decreases the importance of the resource aggregation area in this deployment scenario. For example, the highest reliability for solar and wind generation mixes aggregated over an area the size of New Hampshire (~2 x 10^4 km2) increases from 65% to 85% with the addition of 12 h of storage (Fig. 2a and b). The presence of 1.5x generation again increases reliability by ~10%. When generation is aggregated over areas >10^6 km2, the combination of 1.5x generation and storage allows solar and wind resources to meet >95% of total annual electricity demand with relatively little sensitivity to the resource generation mix (Fig. 2d). [Fig. 2 caption, in part: (c) 1.5x generation, no storage; (d) 1.5x generation, 12 hours of storage. These plots were generated by running each scenario for all 50 states, 8 NERC regions, and the contiguous U.S., respectively. For each resource mix simulated, the results were regressed (y = log(x) + b) and plotted as the shown heat maps. Thus, the plots represent the average area-dependence and effect of resource mix on the ability to meet the total annual electricity demand in the contiguous U.S.; specific regions will be more, or less, reliable.] Storage and generation Storage is critical for solar-heavy wind/solar mixes to approach meeting 80% of total annual electricity demand, with a decreasing marginal return on additional storage after the daily cycle is smoothed (~12 hours of mean demand storage capacity). In contrast, relatively small quantities (<3 hours of mean demand) of energy storage are beneficial to further decreasing the variability of geographically diverse wind resources. In both cases, meeting the last ~20% of total annual electricity demand with only wind and solar generation requires substantial increases in the quantities of installed capacity and/or storage. The marginal return on this additional capacity is low, and the marginal benefit on reliability decreases further as the reliability increases (Fig. 3 and Fig. S7 and S8, ESI). Fig. 3 further illuminates the relationships among the reliability of solar and wind power generation in meeting total annual electricity demand, the availability of energy storage, and the generation quantity when the resources are aggregated across the CONUS. Fig. 3a and b focus on satisfying the first ~80% and last ~20% of total annual electricity demand using linear and logarithmic x-axes, respectively.
From top to bottom in Fig. 3a and b, the generation mix shifts from solar to wind, with the lines in each plot representing scenarios having different capacities of energy storage (0 and 12 hours, and 4 and 32 days are shown). The capacities of storage are again based on the mean annual energy demand, and the generation is similarly the total energy generated divided by the total energy demanded over the 36 year period (the horizontal red line indicates a generation of 1x, i.e. the total energy generated equals the total energy demanded). Consistent with prior studies over smaller length and/or time scales,22 with 1x generation and no storage, the MERRA-2 reanalysis data indicate that CONUS-scale wind power could in principle meet 78% of total annual electricity demand (i.e. the intersection of the dashed 1x generation line and the 0 hours of storage scenario line, Fig. 3a). For comparison, solar power generation equal to annual mean demand could meet 48% of total hourly electricity demand without any energy storage; with 12 hours of storage, this amount of solar power could meet 85% of hourly demand (Fig. 3a). Beyond this level of storage with only solar generation, the benefits on reliability diminish substantially (clustering of lines for 100% solar in Fig. 3a; flattening of yellow curves in Fig. S12 and S13, ESI). In contrast, the addition of energy storage produces only modest increases in reliability for aggregated wind resources, with diminishing benefits beyond ~3 hours of storage, due to the relatively high variability of wind power in conjunction with the lack of a strong daily cycle in the wind resource (Fig. 3a and Fig. S9, ESI). Several weeks' worth of energy storage or high amounts of overbuild would be needed to produce high reliability from a wind-only system, even if aggregated to >3000 km length scales (Fig. 3). The amount of additional installed solar and wind capacity (generation >1x) needed to produce an increase in reliability is indicated by the slope of each line in Fig. 3a (see also Fig. S7 and S8, ESI). In under-generation cases (below the dashed line), the increase in reliability is constant as the installed capacity increases (i.e. slopes are constant), except for cases that include solar with <12 hours of energy storage, for which the benefits of increased reliability diminish at lower levels of storage. When generation is >1x (above the dashed line), additional installed capacity results in diminishing returns as the reliability increases (i.e. slopes are increasing). Regardless of the wind/solar resource mix, meeting the final ~20% of total annual electricity demand with only solar and wind resources, even when aggregated at continental scale, will require generation quantities much greater than 1x and/or substantial amounts of energy storage (Fig. 3b and Fig. S7 and S8, ESI). Fig. 3b demonstrates the technical feasibility of meeting up to 99.99% of demand with wind, solar and storage.15 Meeting 99.97% of total annual electricity demand with a mix of 25% solar-75% wind or 75% solar-25% wind with 12 hours of storage requires 2x or 2.2x generation, respectively. Increasing the energy storage capacity to 32 days reduces the generation needed to 1.1x for these generation mixes. Unmet demand In the absence of much greater than 1x generation and/or storage, the largest and most persistent periods when daily demand cannot be met during the 36 year period analyzed herein coincide with the seasonal minima of the dominant resource used (solar or wind).
Even in scenarios with high annual mean reliability, <70% of daily demand is satisfied for many days in the 36 year record, suggesting that a large backup capacity and/or large quantities of demand management would be needed; instantaneously (in a one-hour period), many hours occur when <40% of demand can otherwise be met. Moreover, the relatively high variability of the wind resource results in sporadic periods, at times not coincident with the seasonal minimum of the wind resource, when demand cannot be met. For all mixes evaluated, the dark hours, with no solar energy production, are the most difficult time to meet demand. Fig. 4 shows the temporal characteristics and extent of satisfied daily demand in 4 different infrastructure scenarios throughout the 36 year period, again assuming that the resources are aggregated over the CONUS (the full suite of infrastructure cases is explored in Fig. S2-S11, ESI). In a wind-heavy mix with 1x generation and no storage, 81% of total annual electricity demand over the 36 years is met. As little as 28% of daily demand is met on the day with the highest unmet demand in the 36 year period, while instantaneous (one-hour) periods of the highest unmet demand in the 36 years satisfy as little as 12% of demand, concentrated after dark during summer months (Fig. 4a and b). Adding 0.5x generation (total of 1.5x) and 12 hours of storage to the wind-heavy mix greatly improves the overall reliability (98.3% of total annual electricity demand is met), but substantial gaps in meeting demand nevertheless occur on summer days (days occur with only 48% of demand met), when the wind resources are low and demand is high (Fig. 4c, d and 1a). For a solar-dominated mix with 1x generation and no storage, 62% of total annual electricity demand is met. In this scenario, every month contains nighttime hours with unsatisfied demand: days occur with as little as 34% of demand met, while one-hour periods occur with as little as 4% of demand satisfied (Fig. 4e and f). The addition of 0.5x generation (total of 1.5x) and 12 hours of storage to this solar-heavy mix substantially increases the total annual electricity demand met to 98.3%. However, there are still days where only 51% of demand is met and hours when as little as 7% of demand is satisfied, typically occurring during winter periods when solar energy production is low and when wind has failed to charge storage, due to weather-driven intermittency (Fig. 4g and h). Better storage management can increase the minimum fractions of instantaneous and possibly daily demand satisfied, at the expense of decreasing the fraction of instantaneous and daily demand satisfied at other times; however, such an approach will not impact the total annual electricity demand satisfied. With accurate forecasts of solar and wind energy and electricity demand, in conjunction with improved demand management, decision makers might be able to discharge storage more slowly to reduce the overall backup capacity that would be required to satisfy instantaneous demand.
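The daily and hourly "fraction of demand met" statistics discussed above follow directly from the hourly supply and demand series produced by the simulation. The sketch below is a minimal illustration of that bookkeeping; the array names and the use of NumPy/pandas are my own assumptions, not the authors' ESI code.

```python
import numpy as np
import pandas as pd

def satisfied_fractions(supply_gw, demand_gw, index: pd.DatetimeIndex):
    """Hourly and daily fractions of demand met, given hourly supply
    (generation plus storage discharge) and hourly demand."""
    supply_gw = np.asarray(supply_gw, dtype=float)
    demand_gw = np.asarray(demand_gw, dtype=float)
    met = np.minimum(supply_gw, demand_gw)        # energy actually delivered each hour
    hourly_frac = met / demand_gw                 # instantaneous (one-hour) fraction met
    daily = pd.DataFrame({"met": met, "demand": demand_gw}, index=index).resample("D").sum()
    daily_frac = daily["met"] / daily["demand"]   # daily fraction met
    return hourly_frac, daily_frac

# The worst gaps quoted in the text correspond to hourly_frac.min() and daily_frac.min().
```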
Discussion The amplitude and timing of the daily and seasonal cycles of the solar and wind resources and electricity demand, as well as the inherent variability of these cycles, present challenges for deep decarbonization systems that are heavily reliant on solar and wind generation. [Fig. 4 caption, in part: (a and b) 25% solar-75% wind resource mix, 1x generation and no energy storage; (c and d) 25% solar-75% wind resource mix, 1.5x generation and 12 hours of energy storage; (e and f) 75% solar-25% wind resource mix, 1x generation and no energy storage; (g and h) 75% solar-25% wind resource mix, 1.5x generation and 12 hours of energy storage. In each scenario, the left panel shows the magnitude of demand met for each day throughout the 36 year period, and the right panel details the energy profile over a week-long period when meeting demand is particularly difficult. A more comprehensive set of these results is shown in Fig. S2-S11 in the ESI.] ("Total annual electricity demand" is distinct from "instantaneous" hourly averaged power demand, which is discussed later.) Meeting the last ~20% of total annual electricity demand requires overcoming the seasonal cycles of the solar and wind resources as well as extreme weather-related events. With an infinite amount of idealized energy storage, in principle, variable electricity demand could be met with 100% reliability using wind and solar generation with no overbuild. For modest amounts of overbuild, several weeks' worth of electricity storage would be required to produce a reliable electricity system using only these primary energy sources. However, as discussed below, current costs of storage would need to decrease by an order of magnitude or more to constitute an economically feasible solution. The results presented in Fig. 3 have been normalized to enable scaling to scenarios with different levels of (net) demand, in the absence of changes to the spatial and/or temporal character of demand and/or solar and wind resources. For example, if the power generation mix included baseload generation, the results presented in Fig. 3 would apply only to the portion of demand being met by solar and wind (e.g. total demand net of baseload supply). The effect of baseload generation on reliability depends on the correlation between baseload variability and solar and wind variability. If increased levels of baseload generation capacity are available and can be cost-effectively dispatched, the total system reliability would increase, provided that the ratio of solar and wind capacity to the net (total minus baseload) demand met is constant in both cases. Climate change is expected to alter the spatial and temporal distribution of solar and wind resources and to affect demand through climate's effect on human behavior. The effect of such changes on reliability depends on the correlation of the changes with existing demand and energy resources, and reliability gaps would be exacerbated by further increases in the variability of the wind resource. A quantitative assessment of the magnitude of these changes in reliability requires a detailed analysis within a specific climate and energy supply and demand scenario.
Further simulations could be performed based on the model presented here to consider the influence of factors such as the availability of baseload generation, climate change, adoption of electric vehicles, demand management, etc., but such analyses are beyond the scope of this study, which aims to highlight the interactions among wind, solar, natural gas, and electricity storage in the context of historical weather and demand data. Some scenarios moreover involve large-scale adoption of electric vehicles. Direct charging of battery electric vehicles could produce changes in demand profiles as well as absolute demand levels, and would thus result in different reliability depending on the strategy for implementation of the technology. However, further complexity is introduced if vehicles are also used as energy storage devices for the grid, enabling smoothing of some variability. Additional simulations of such scenarios are thus needed to quantify the impact on reliability and on total and marginal costs. If large increases in battery electric vehicles affect only the magnitude of demand (no spatial or temporal change), the effect on reliability and on total and marginal costs (see discussion below) would remain unchanged relative to that described herein. If large numbers of hydrogen vehicles are adopted, hydrogen could, in principle, be made through electrolysis and stored for later refueling, providing a buffer between electricity and refueling demands. Such a system could be used to increase reliability by varying the demand profile to better match that of solar and wind generation. In either case, we note that the ability of vehicle-based storage to buffer gaps between supply and demand is limited because, for example, discharging 10% of the stored energy in 150 million light-duty vehicles in the U.S., each with a total battery storage capacity of 100 kW h, would provide 1.5 TW h of stored energy, sufficient to provide ~3 h of mean power demand in the U.S., and hence clearly insufficient to address the majority of the gaps between supply and demand on daily, weekly, or seasonal timescales that affect reliability as evaluated herein. Low-carbon renewables such as biogas and/or bioelectricity, as well as pumped hydroelectricity or geothermal energy, could thus prove to be preferable approaches to improving reliability on these timescales relative to vehicle-based or stationary batteries. Each of these configurations represents a distinct combination of infrastructure and future investments, and it may be unnecessary (and economically inefficient) to extensively pursue both large-scale storage and long-distance transmission. In wind-heavy energy generation scenarios, efficient long-distance transmission enables access to wind resources that are often de-correlated from each other, with east-west linkages being especially beneficial.22 Moreover, these wind resources generally have a higher capacity factor than solar resources, resulting in a lower total installed capacity and potentially less investment, depending on the capital cost differences between solar and wind generation systems. One proposed, and modeled, U.S.-wide transmission system consists of an estimated 34,000 km (21,000 miles; 7 lengths of the U.S. from Los Angeles, CA to Portland, Maine) of line with a capacity of up to 12 GW.22
An installed cost of $1 million per GW per km implies a capital expenditure on the order of $410 billion, as compared to the >$1 trillion that would be required to install 12 hours of storage in the U.S. (mean demand is ~450 GW), assuming an installed cost at present of $200 per kW h (pumped hydro; most other systems, e.g. batteries, flywheels, etc., have current costs in excess of $500 per kW h).33 Our analysis moreover highlights the geophysically based limits on reliability, and the consequent magnitude and frequency of gaps between supply and demand that would occur if solely wind and solar generation were used in an idealized, lossless continental-scale transmission scenario. In the case of the solar-dominated system, storage would enable smoothing of the daily cycle. At present, energy-storage technologies are limited by high costs and/or a lack of geographically suitable sites (pumped hydro, compressed air).33 As an extreme example, we consider a scenario in which only wind and solar generation is deployed and only storage is used to increase reliability. For context, storage totaling 12 hours of U.S. mean demand, 5.4 TW h of energy capacity, is ~150 years of the annual production of the Tesla Gigafactory.33,34 At $100 per kW h and $500 per kW h, the total capital investment would be $540 billion and $2.7 trillion, respectively. With a 10 year service life, one cycle per day, a linear capacity decline to 80% of rated capacity at the end of storage system life, 92% charge/discharge energy storage efficiency, a 10% discount rate, and no operating costs, a currently representative cost of $500 per kW h for a fully installed secondary Li-ion battery system yields a levelized cost of energy storage (LCES) of ~$0.25 per kW h.33,34 Achieving 99.97% reliability with a system consisting solely of solar and wind generation in conjunction with energy storage would require a storage capacity equivalent to several weeks of average demand (Fig. 3b), and the low capacity factor would lead to an LCES of >$0.25 per kW h. Three weeks of storage (227 TW h) at the cost target of $100 per kW h results in a capital expenditure of $23 trillion and either ~6500 years of the annual Tesla Gigafactory production capacity or a ~900x increase in the pumped hydro capacity of the U.S. A mix of 75% solar and 25% wind (by total energy generation contribution) with 3.4x generation and no storage would have approximately the same reliability (90%) as that same solar-wind mix with 1x generation and 12 hours of storage (Fig. 3). At $1.50 per W (for both solar and wind) and capacity factors of 20% and 38% for solar and wind, respectively, the additional 2.4x generation would cost an additional ~$7.1 trillion.35 A 25% solar-75% wind mix with 1.2x generation and no storage would have approximately the same reliability (87%) as that same solar-wind mix that had instead 1x generation and 12 hours of storage (Fig. 3). At $1.50 per W (for both solar and wind) and capacity factors of 20% and 38% for solar and wind, respectively, the additional 0.2x generation would cost an additional ~$250 billion, but would require long-distance transmission. The trade-offs between generation quantities and hours of storage capacity are thus complex, being dependent on the resource mix and cost trajectories. Approximately a capacity doubling for the 75% solar-25% wind scenario and a capacity quadrupling for the 25% solar-75% wind scenario would be required to achieve 99.97% reliability without storage.
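The order-of-magnitude cost comparisons in this and the following paragraph can be reproduced with simple arithmetic. The sketch below is an illustrative calculation under the assumptions quoted in the text ($500 per kW h installed battery cost, 10 year life, one cycle per day, linear fade to 80% of rated capacity, 92% efficiency, 10% discount rate, no operating costs; $1.50 per W generation, capacity factors of 20% for solar and 38% for wind); function names, and the capital recovery factor approach, are mine, and small rounding differences from the quoted figures are expected.

```python
def crf(rate: float, years: int) -> float:
    """Capital recovery factor used to annualize an up-front cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lces_per_kwh(capex_per_kwh: float, years: int = 10, rate: float = 0.10,
                 cycles_per_year: float = 365.0, end_of_life_capacity: float = 0.80,
                 efficiency: float = 0.92) -> float:
    """Levelized cost of energy storage ($ per kW h delivered) under the
    simple assumptions stated in the text (one cycle per day, no O&M)."""
    avg_capacity = (1.0 + end_of_life_capacity) / 2.0        # linear fade to 80% of rated
    kwh_delivered_per_year = cycles_per_year * avg_capacity * efficiency
    return capex_per_kwh * crf(rate, years) / kwh_delivered_per_year

def storage_capex_trillion(hours_of_mean_demand: float, cost_per_kwh: float,
                           mean_demand_gw: float = 450.0) -> float:
    """Up-front cost ($ trillion) of storage sized in hours of mean demand."""
    kwh = mean_demand_gw * 1e6 * hours_of_mean_demand         # GW h -> kW h
    return kwh * cost_per_kwh / 1e12

def overbuild_capex_trillion(extra_generation_multiple: float, solar_share: float,
                             mean_demand_gw: float = 450.0, cost_per_w: float = 1.50,
                             cf_solar: float = 0.20, cf_wind: float = 0.38) -> float:
    """Cost ($ trillion) of the extra name-plate capacity needed to raise the
    generation multiple, split between solar and wind by energy share."""
    extra_avg_gw = extra_generation_multiple * mean_demand_gw
    capacity_gw = (solar_share * extra_avg_gw / cf_solar
                   + (1 - solar_share) * extra_avg_gw / cf_wind)
    return capacity_gw * 1e9 * cost_per_w / 1e12              # GW -> W, $ -> $ trillion

print(round(storage_capex_trillion(12, 200), 2))               # ~1.08, the ">$1 trillion" figure
print(round(lces_per_kwh(500.0), 2))                           # ~0.27, in line with the ~$0.25 per kW h quoted
print(round(overbuild_capex_trillion(2.4, solar_share=0.75), 1))  # ~7.1 ($ trillion) for the 75% solar case
```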
With a lifetime of >25 years at $2 per W and a 35% capacity factor, a doubling or quadrupling of generation capacity from 1x generation to 2x or 4x generation would cost an additional ~$2.5 trillion or ~$7.5 trillion, respectively. The costs associated with storage, absent a radically new and low-cost solution, are likely to exceed most other options for overcoming the seasonal challenge. Thus, excess capacity installation (>1x generation), where beneficial, could be a substantially less expensive option for both solar and wind systems, especially for meeting the seasonal challenge. Some previous studies36 have indicated that 100% of U.S. energy generation could physically be met with a combination of solar and wind, ~1x generation and ~3-4 weeks of storage capacity, and concluded that such an energy system can be affordable, with, however, the assumption of a storage capital cost of $1 per kW h. Our analysis, in accord with other studies over more limited geographic scales and time scales, indicates however that costs rise sharply if more than ~80% of total annual U.S. electricity demand is met using solely wind and solar generation in conjunction with storage and transmission, given storage capital cost assumptions (>$100 per kW h) that are consistent with current and near-term projected costs.22 The difference in conclusions with respect to electricity costs has its origin in the physical characteristics of solar and wind resources, which cause the marginal improvements in reliability from additional storage and generation to decrease substantially as reliability increases (Fig. 3), as well as the low capacity factors for storage used infrequently to compensate for the intermittency of the solar and wind resources over long time scales of unmet demand. Conclusions CONUS-scale aggregation of solar and wind power is not sufficient to provide a highly reliable energy system without large quantities of supporting technologies (energy storage, separate carbon-neutral, flexible generators, demand management, etc.). This conclusion stems directly from an analysis of the physical characteristics of solar and wind resources and does not depend on any detailed modeling assumptions. The system architecture required to produce high reliability using primarily solar and wind generation is driven almost entirely by the need to overcome seasonal and weather-driven variability in the solar and wind resources. Achieving high reliability with solar and wind generation contributing >80% of total annual electricity demand will require a strategic combination of energy storage, long-distance transmission, overbuilding of capacity, flexible generation, and demand management. In particular, our results highlight the need for cheap energy storage and/or dispatchable electricity generation. Determination of the most cost-effective strategic combination depends on future costs that are not well-characterized at present. Regardless of the levelized cost of electricity from solar or wind power alone or in combination, our examination of 36 years of weather variability indicates that the primary challenge is to cost-effectively satisfy electricity demand when the sun is not shining and the wind is not blowing anywhere in the U.S. Methods Time-averaged hourly resource data were taken from the NASA-developed MERRA-2 reanalysis product, which spans 36 years (1980-2015) and has a resolution of 0.5° latitude by 0.625° longitude.31 Wind speeds at 50 m height were used, minimizing extrapolation errors from the available raw data set.
Wind speed magnitudes at each grid point were found using the Pythagorean theorem. Each raw data point (an hourly energy density (solar) or wind speed (wind) value at a specific location and time) was converted into a capacity factor based on a pre-determined power capacity rating for solar and wind generators. The capacity factor describes the actual energy output as compared to the system's rated energy output (power capacity multiplied by one hour). For solar, the raw data were divided by 1000 W m-2, which is the industry standard used to rate current solar cells and modules. The wind capacity factor calculation employed a piecewise function consisting of four parts: (i) below a cut-in speed (u_ci) of 3 m s-1 the capacity factor is zero, (ii) between the cut-in speed of 3 m s-1 and the rated speed (u_r) of 15 m s-1 the capacity factor is u^3/u_r^3, (iii) between the rated speed of 15 m s-1 and the cut-out speed (u_co) of 25 m s-1 the capacity factor is 1.0, and (iv) above the cut-out speed of 25 m s-1 the capacity factor is zero.38 An area-weighted mean hourly energy generation profile was created for the solar and wind resources individually for each region of interest. The native MERRA-2 projection was used to calculate the area ratios, although because the units are non-standard, a different method was used for calculating the absolute areas in square kilometers. For Fig. 2, two methods were used to calculate the area of the regions; for states and the contiguous U.S., the actual values were tabulated and used, while for the NERC regions, the MERRA-2 grid was re-projected from the native WGS84 (EPSG: 4326) coordinate system to the North American Lambert Conformal Conic (ESRI: 102009) coordinate system. The size of the NERC regions required a relatively large projection area and thus the accuracy of the area calculations was off by ~10%. The values were systematically low, so each calculated area was corrected by dividing by 0.9. The installed solar and wind capacities were calculated based on a specified resource mix (X% of energy generated is from solar, (100 - X)% of energy generated is from wind), the hourly resource data, and the generation value. The generation value is defined as the energy produced by solar and wind divided by the total energy demand over all 36 years, and is shown as a multiplier (e.g. 1x generation means the energy generated over the 36 year period is equal to the energy demanded). Capacity factors derived from reanalysis data are known to differ from those of real-world systems, and thus these calculated capacities were used only as scaling values to produce the desired amount of generation.4,39 Accordingly, the reanalysis data were used herein only for their temporal and spatial characteristics, which have been shown to be in good agreement with observational data (wind speeds).4 The normalized capacity values shown in Fig. 3, and in other related supplementary results, were calculated using real-world capacity factors for solar and wind systems (CF_solar = 20%, CF_wind = 38%) and the generation values. Hourly NERC-wide (including small contributions from some Canadian and Mexican regions) electricity demand data were taken from EIA for July 4, 2015 to July 3, 2016.40 These dates were chosen because they produced a smooth transition from July 3, 2016 to July 4, 2015. The data were ordered from Jan 1 to Dec 31 irrespective of year, and were joined together 36 times to form a 36 year record consistent with the resource data.
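The capacity-factor conversion described above is straightforward to sketch in code. The snippet below is an illustration only (the function names are mine, and the cubic ramp follows the piecewise form described in the text); it is not the authors' ESI implementation.

```python
import numpy as np

def solar_capacity_factor(surface_flux_w_m2):
    """Hourly solar capacity factor: surface flux divided by the
    1000 W m^-2 rating condition described above."""
    return np.asarray(surface_flux_w_m2, dtype=float) / 1000.0

def wind_capacity_factor(speed_50m, u_ci=3.0, u_r=15.0, u_co=25.0):
    """Hourly wind capacity factor from the 50 m wind speed, using the
    four-part piecewise power curve described above (cubic ramp u^3/u_r^3
    between cut-in and rated speed)."""
    u = np.asarray(speed_50m, dtype=float)
    cf = np.zeros_like(u)                       # zero below cut-in and above cut-out
    ramp = (u >= u_ci) & (u < u_r)
    cf[ramp] = u[ramp] ** 3 / u_r ** 3
    cf[(u >= u_r) & (u <= u_co)] = 1.0          # rated output up to the cut-out speed
    return cf

# Wind speed magnitude from the zonal/meridional reanalysis components:
#   speed_50m = np.hypot(u50m, v50m)
# Region-average hourly profile, weighting each grid cell by its area:
#   profile = np.average(cf_grid.reshape(n_hours, -1), axis=1, weights=cell_areas.ravel())
```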
Some errors existed in the demand data set, and the most significant errors were corrected (see ESI for a full log of corrections). A single year of demand data was used because a self-consistent, continuous, gapless time series of U.S.-wide hourly electricity demand is not readily available for the long time periods over which the reanalysis data allow extraction of geophysical information on the solar and wind resources. Correlation analysis between the solar and wind resources and the year of available demand data (2015) demonstrates that this year is not an outlier among all other analyzed years (see SOM), indicating that our use of a single year of demand data should not substantially affect our analysis of geophysical reliability over the multi-decadal period. Given the resource data, installed capacities and electricity demand, a forward-running simulation was performed to track solar and wind generation, charging or discharging of storage, if present, and the ability of wind, solar and energy storage to meet demand in every hour (a flow diagram of the algorithm is shown in Fig. S14, ESI). Storage was charged with excess solar and wind generation, if available, until the storage was full, after which solar and wind generation was curtailed. The storage was discharged until empty if demand exceeded solar and wind generation. The decision to discharge was based solely on the current hour, and completely filled the difference between demand and solar and wind generation, provided that sufficient energy was present in the storage system. The ability to forecast future solar and wind generation and demand may allow dispatch of storage in a more uniform fashion, thereby minimizing the need for other backup capacity and/or demand management. When storage was assumed to be available, the initial state was set to empty, but the simulation looped back at the end of the 36 year period, preserving the end-storage state as the initial state, and running forward from the beginning until no change in the storage state was detected as compared to the previous loop. This method ensured that the initial condition did not affect the results. Perfect storage (100% efficient and with no charge/discharge rate limits) and perfect transmission (the area over which the resource was aggregated had no transmission constraints or losses, i.e. a copper-plate assumption) were assumed. Permutations were run for different storage capacities, generation values, areas of resource aggregation and resource mixes. The NERC-wide demand data were used for all simulation runs, although for smaller areas this demand profile may deviate from the local demand profile. The Python code used for this study can be found in the ESI. Conflicts of interest There are no conflicts to declare.
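For readers who want a concrete picture of the forward-running simulation just described (greedy, current-hour storage dispatch with curtailment when storage is full, looping the record so the initial state does not matter), the sketch below is a compact illustration of that algorithm. It is not the authors' ESI code; variable names are mine, and the lossless, unconstrained storage reflects the stated idealization.

```python
import numpy as np

def simulate_dispatch(generation_gwh, demand_gwh, storage_capacity_gwh, max_passes=10):
    """Hour-by-hour dispatch: charge storage with excess generation (curtail once
    full), discharge it to fill shortfalls (until empty). Returns the fraction of
    total demand met and the hourly unmet-demand series."""
    generation_gwh = np.asarray(generation_gwh, dtype=float)
    demand_gwh = np.asarray(demand_gwh, dtype=float)
    unmet = np.zeros(len(demand_gwh))
    state, prev_end_state = 0.0, None            # storage starts empty
    for _ in range(max_passes):                  # loop the record until the end-of-record
        for h in range(len(demand_gwh)):         # storage state repeats between passes
            surplus = generation_gwh[h] - demand_gwh[h]
            if surplus >= 0.0:
                state = min(state + surplus, storage_capacity_gwh)  # charge; rest curtailed
                unmet[h] = 0.0
            else:
                discharge = min(-surplus, state)                    # discharge what is available
                state -= discharge
                unmet[h] = -surplus - discharge
        if prev_end_state is not None and abs(state - prev_end_state) < 1e-9:
            break
        prev_end_state = state
    reliability = 1.0 - unmet.sum() / demand_gwh.sum()
    return reliability, unmet
```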
Comprehensive Study on the Performance of Waste HDPE and LDPE Modified Asphalt Binders for Construction of Asphalt Pavements Application This research is aimed at investigating the mechanical behavior of bitumen modified by the addition of high-density polyethylene (HDPE) and low-density polyethylene (LDPE) obtained from waste plastic bottles and bags. Polymers (HDPE and LDPE) with percentages of 0%, 2%, 4%, and 6% in shredded form by weight of bitumen were used to evaluate the spectroscopic, structural, morphological, and rheological properties of the polymer-modified binders. The rheological properties were determined for several parameters: viscosity (η) from a rotational viscometer (RV); the rutting factor G*/Sin (δ) and fatigue characteristics G*.Sin (δ) of the modified binders from a dynamic shear rheometer (DSR); and short- and long-term aging from the rolling thin film oven (RTFO) and pressure aging vessel (PAV). The thermal (low-temperature) characteristics of the binders and the grain size and texture of the polymers, for both LDPE and HDPE, were found using the bending beam rheometer (BBR) and X-ray diffraction (XRD), respectively. Fourier transform infrared (FTIR) analysis revealed the presence of polymer contents in the modified binder. Scanning electron microscopy (SEM) images revealed the presence of HDPE and LDPE particles on the surface of the binder. Creep rate (m) and stiffness (S) analysis in relation to temperature showed a reduction in the stress relaxation rate. Results revealed the best rutting resistance for 6% HDPE, which also showed an improvement of 95.27% in G*/Sin (δ), increasing the performance of the bituminous mix. Similarly, the addition of 4% LDPE resulted in the maximum dynamic viscosity irrespective of the temperature. Moreover, fatigue resistance showed a significant change with both HDPE and LDPE. The fascinating features of waste plastic modified binders make them important for use in new road construction, addressing the high viscosity and mixing problems produced by plastic waste and improving the performance of flexible pavements all over the world. Introduction The heavy traffic and loading over time adversely affect the rheological performance of flexible pavements. The upper part of a flexible pavement is mainly composed of bitumen, aggregates, and filler materials to bear the stresses imposed by adverse traffic conditions. Bitumen as a binding material is made up of hydrocarbons and has a strong influence on the performance of asphalt pavements [1]. The rheological properties of bitumen can enhance the structural and functional performance of asphalt pavements [2]. Material characterization is one of the major factors affecting pavement design. If the materials are unable to provide adequate resistance to fatigue and permanent deformation, distress may occur in asphalt pavements, which may affect their structural and functional properties. When the performance of the pavement reaches a level where the desired function of the pavement is no longer available, or the pavement is not optimally providing the desired service, this is termed failure.
In flexible pavements, three major types of distress cause failure in the structure, i.e., rutting, fatigue cracking, and, to an extent, thermal cracking [3]. For the last 40 years, researchers have been working to modify bitumen by adding polymers to enhance its physical and rheological properties [4]. For developing countries such as Pakistan, the addition of waste plastic to bituminous materials is one of the most readily available means to strengthen the bitumen and reduce the environmental hazards produced by these wastes. According to the Environmental Protection Agency (EPA), Pakistan produced 3.9 million tonnes of plastic waste during the year 2020, of which 70 percent (2.6 million tonnes) was mismanaged. According to the EPA, this amount of plastic waste may triple by 2040 [5]. The modified bitumen obtained from waste plastic is used as either an elastomer or a plastomer [6,7]. The rheological properties and viscosity behavior can be strongly influenced by the addition of elastomers and plastomers from 2% to 6% by weight of bitumen [8]. The process of blending bitumen with different polymers is termed the wet process [9]. Depending on the nature, size, and shape of the polymers and the type of equipment used for mixing [10], several properties, such as resistance to permanent deformation, resistance to moisture-related distresses, fatigue life, and the achievement of high stiffness at high temperature, can be improved by adding waste plastics [11][12][13][14]. By substituting HDPE and LDPE, thinner pavement cross-sections can be developed more efficiently due to their mechanical strength and chemical compatibility with the original binder [15]. The addition of nano-silica from 1% to 6% by weight of bitumen to a polymer-modified binder has improved its viscoelastic properties, the rutting factor G*/Sin (δ), and the fatigue characteristics G*.Sin (δ) [16]. Studies of styrene-butadiene-styrene (SBS) polymer modified asphalt mixtures and analysis of strain distribution showed that SBS polymer modified bitumen improves the stress levels within the asphalt mastic [17][18][19][20]. A low percentage of SBS revealed the dispersion of polymer particles in a continuous phase of bitumen, as the original binder swells the small polymer globules. As a result, compatible fractions are spread homogeneously in a continuous phase of bitumen [17]. In another investigation, adding 2% to 6% nano-silica to polymer-modified bitumen delayed the aging of the bitumen. The rutting and fatigue parameters were improved along with the viscoelastic properties of the polymer-modified binder [21,22]. Aging and regeneration can change the microstructural and morphological performance of bitumen [23][24][25]. Microstructural investigations of SBS polymer modified binder with the addition of montmorillonite (MMT), using X-ray diffraction (XRD) and FTIR, indicated that the softening point and aging index decreased due to the addition of Na+. The introduction of Na+ into the SBS polymer-modified binder created a phase-separated structure [26]. The surface properties of polymer-modified bitumen were determined using scanning electron microscopy and ATR-FTIR. The contact angle decreased from 107.7° to 4.7°, while the oxygen atomic percentage increased from 7.12% to 13.15%, respectively. This exhibits the chemical interaction of the polymer-modified binders.
Chemical capabilities in terms of oxidation in aged bitumen can be seen without the evolution of interactions [27]. Although the addition of nano-silica and SBS has its advantages in polymer modification, there is still a lack of research on the affinity of polymers with bitumen. At a low strain level, there is no slippage of polymer chains or change in morphological properties, but at higher strains and higher cyclic loads, the research question still remains uncertain [28]. Another study investigating the physical and rheological properties of asphalt binders introduced nanoparticles of aluminum oxide (Al2O3) at 3%, 5%, and 7% by weight of bitumen. The addition of 5% aluminum oxide by weight of bitumen reduced the phase angle (δ) and increased the complex shear modulus (G*) [29]. However, the bond between nanoparticles and bitumen still needs further investigation in terms of morphological analysis. Although extensive research has been conducted so far on polymer-modified bitumen, there is a need for improvement in the polymer in terms of its rheology, morphology, and creep assessment [30]. Recently, researchers have reported some critical issues with using polymers as modifiers via the wet process. These issues include the mixing problems related to high viscosity at higher temperatures, the affinity of polymers with bitumen, and the high cost of the modification [9]. Due to the ample amount of plastic waste available in Pakistan, this research uses two waste plastics, i.e., HDPE obtained from waste bottles and LDPE obtained from waste plastic bags, in shredded form, intending to investigate the above critical issues. The morphological properties of the neat binder have been compared with those of the HDPE and LDPE modified binders in order to predict the bond linkage between the neat binder and the polymers. The performance of the neat and modified binders has been investigated in the rheological investigation section (Section 2.4.2). The dynamic viscosity measurements address the high viscosity and mixing issues of adding polymers at high temperatures. The comparison of the stiffness of the neat and modified binders is discussed in the creep analysis. The schematic representation of the study is shown in Figure 1.
Materials The fundamental materials used in this research work were bitumen, high-density polyethylene (HDPE), and low-density polyethylene (LDPE) obtained from a local depository in Peshawar, Pakistan. The waste plastic bottles and bags were first washed, dried, and shredded to pass through sieve No. 4 [30]. The performance and penetration grades of the bitumen were PG 58-22 and 60/70, respectively, obtained from Attock Oil Refinery Limited (ARL), Pakistan. Table 1 presents the physical properties of the asphalt binder, while the physical properties of the HDPE and LDPE used in this investigation are presented in Table 2. Performance Grade Testing The performance grade (PG) of the conventional and HDPE and LDPE modified bitumen was determined by following AASHTO M 320 specifications, as presented in Table 3. Preparation of PE Modified Bitumen The HDPE and LDPE were mixed separately with bitumen in an agitator at 3000 rpm after applying a heat treatment (163 °C for 1 h in an oven). The HDPE and LDPE were mixed separately at different percentages of 2%, 4%, and 6% by weight of bitumen. The mixture was reheated at 163 °C for 10 min after the addition of the modifier. Testing Procedures 2.4.1. SEM, FTIR, and XRD Analysis As the chemistry of the polymers is quite different from that of bitumen, the interaction of HDPE and LDPE with bitumen was determined using different characterization tools. FTIR spectroscopy was carried out to identify the functional groups in the neat and polymer-modified binders. The specifications followed for high- and low-density polyethylene were AASHTO T 302-15. The chemical bonding of HDPE and LDPE with bitumen was investigated by FTIR (Perkin Elmer L1600). This test was performed in the wavenumber range of 4000-400 cm-1 using the KBr technique. The FTIR testing temperature was 25 °C. The bond concentration, or the identification of the functional group, can be determined from the intensity of the peaks in the FTIR spectra [31]. The polymer's interaction with the neat bitumen enhances the morphological properties of the HDPE and LDPE modified binder and needs to be understood using scanning electron microscopy (SEM, JEOL JSM 5910). XRD analysis of the neat bitumen and modified binders was carried out to examine their structural features. The crystalline features of the materials were studied using a Bruker AXS D8 Advance system. X-ray diffraction (XRD) was used to investigate the diffraction patterns, grain size, and texture of the modified bitumen. X-ray diffraction, commonly abbreviated as XRD, is a non-destructive research tool used for the study of crystalline material structure.
It studies the structure of the crystal and is used to classify the crystalline phases contained in a substance, thus disclosing information about the chemical composition of the material [32]. Rheological Investigation The rheological assessments were carried out to determine the viscosities and rutting resistance, along with the short- and long-term aging, of the HDPE and LDPE modified asphalt binders using the dynamic shear rheometer (DSR, AASHTO T 315). All tests were performed at different temperatures (58 °C, 64 °C, 70 °C, and 76 °C). Following the specifications of AASHTO T 240-09 for short-term (RTFO) aging and ASTM D 6521 for long-term (PAV) aging, the rutting factor G*/Sin (δ) and fatigue characteristics G*.Sin (δ) of the original and polymer modified binders were determined. The frequency of the testing samples was 10 rad/s according to the standard specifications. According to AASHTO T-315, the temperature was maintained at 58 °C, 64 °C, 70 °C, and 76 °C for rutting resistance determination. The original samples were tested against a minimum G*/Sin (δ) value of 1.0 kPa and the RTFO aged binder samples against a minimum value of 2.2 kPa at the maximum temperature, while the maximum allowed value of G*.Sin (δ) was 5000 kPa. Schematics of the rheological investigations of the neat and modified bitumen are depicted in Figure 2 [33][34][35].
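The pass/fail screening implied by the limits quoted above (G*/Sin (δ) of at least 1.0 kPa for the original binder and 2.2 kPa after RTFO aging, and G*.Sin (δ) of at most 5000 kPa after PAV aging) can be expressed compactly. The sketch below is a generic illustration with hypothetical DSR values, not data from this study; function names are mine.

```python
import math

def rutting_parameter_kpa(g_star_kpa: float, delta_deg: float) -> float:
    """High-temperature rutting parameter G*/sin(delta) from DSR results."""
    return g_star_kpa / math.sin(math.radians(delta_deg))

def fatigue_parameter_kpa(g_star_kpa: float, delta_deg: float) -> float:
    """Intermediate-temperature fatigue parameter G*.sin(delta)."""
    return g_star_kpa * math.sin(math.radians(delta_deg))

def passes_limit(g_star_kpa: float, delta_deg: float, condition: str) -> bool:
    """Screen a DSR result against the limits quoted in the text."""
    if condition == "original":   # unaged binder
        return rutting_parameter_kpa(g_star_kpa, delta_deg) >= 1.0
    if condition == "rtfo":       # short-term aged
        return rutting_parameter_kpa(g_star_kpa, delta_deg) >= 2.2
    if condition == "pav":        # long-term aged
        return fatigue_parameter_kpa(g_star_kpa, delta_deg) <= 5000.0
    raise ValueError(condition)

# Hypothetical example: G* = 1.8 kPa and delta = 82 deg for an unaged binder at 64 C
print(passes_limit(1.8, 82.0, "original"))   # True: 1.8/sin(82 deg) ~ 1.8 kPa >= 1.0 kPa
```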
Short-Term and Long-Term Aging of a Binder The rolling thin film oven test (RTFO) was performed to determine the short-term aging of the bitumen according to the standard specifications of AASHTO T-240. The total duration of the test was 85 min and the temperature was 163 °C in the RTFO bottle [34]. The volatile components in bitumen cause short-term aging. For long-term aging, the sample was placed in the plates of the pressure aging vessel (PAV) for 20 h. The PAV can predict the behavior of the binder up to 10 years into its service life. Further physical testing was performed after storing the binder in cans. Dynamic Viscosity Test The rotational viscometer (RVDV-111) was used to determine the dynamic viscosity of the original and modified samples. The percentages of polymer in the modified bitumen ranged from 2% to 6% by weight of bitumen. According to the specifications of AASHTO T-316, the test temperature for the viscosity test ranged from 135 °C to 165 °C, in intervals of 10 °C. The rotational speed of the cylindrical spindle was 20 rpm [36]. Bending Beam Rheometer Tests for Creep The low-temperature (thermal) cracking behavior was measured by BBR. The BBR test was performed according to the specifications of AASHTO T-313 at three different temperatures of 0 °C, −6 °C, and −12 °C. The length, thickness, and width of the bitumen beams were 127 mm, 6.4 mm, and 12.7 mm, respectively. The temperature was maintained constant for 60 min. After the preloading conditions, a 100 g load was applied to the rectangular beam at a constant rate to measure the deflection at its center [37]. The loading time was from 8 to 240 s to determine the creep stiffness (S) and creep rate (m). Schematics for the creep rate (m) and creep stiffness (S) are shown in Figure 3.
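The creep stiffness S(t) and creep rate (m-value) are obtained from the measured mid-span deflection of the beam. The sketch below uses the standard elastic beam relation employed for the BBR, S(t) = P·L^3 / (4·b·h^3·δ(t)), with the beam width, thickness, and 100 g load quoted above; the 102 mm support span and the quadratic log-log fit for the m-value are standard for the test but are my assumptions here, and the deflection values are purely hypothetical.

```python
import numpy as np

SPAN_MM, WIDTH_MM, THICK_MM = 102.0, 12.7, 6.4   # span assumed; b and h from the text
LOAD_N = 0.100 * 9.81                            # 100 g creep load

def creep_stiffness_mpa(deflection_mm):
    """S(t) = P*L^3 / (4*b*h^3*delta(t)), in MPa (N/mm^2)."""
    deflection_mm = np.asarray(deflection_mm, dtype=float)
    return LOAD_N * SPAN_MM ** 3 / (4.0 * WIDTH_MM * THICK_MM ** 3 * deflection_mm)

def m_value(times_s, stiffness_mpa, t_eval: float = 60.0) -> float:
    """Creep rate m: magnitude of the slope of log S versus log t at t_eval
    (60 s is the usual reporting time), from a quadratic fit in log-log space."""
    a, b, _c = np.polyfit(np.log10(times_s), np.log10(stiffness_mpa), 2)
    return abs(2 * a * np.log10(t_eval) + b)

# Hypothetical deflections (mm) at the standard loading times (s):
t = np.array([8, 15, 30, 60, 120, 240], dtype=float)
defl = np.array([0.15, 0.19, 0.24, 0.30, 0.38, 0.48])
S = creep_stiffness_mpa(defl)
print(round(S[3], 1), round(m_value(t, S), 3))   # stiffness at 60 s and the m-value
```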
Scanning Electron Microscopy
Scanning electron micrographs of the control (neat) binder and of the HDPE and LDPE modified binders are shown in Figure 4. Micro-cracks on the surface of the neat binder can be seen in Figure 4a. Under the cyclic loading imposed by heavy traffic, such crack growth may accelerate and cause severe microstructural damage in flexible pavements [25]. When the molecular chains of 6% HDPE interact with the neat binder, as shown in Figure 4b, the surface of the modified bitumen becomes smooth and shows no micro-cracks; the continuous polymer matrix formed at 6% HDPE by weight of bitumen is clearly visible. This dense, crack-free surface of the HDPE-modified bitumen may provide better resistance to the rutting and fatigue distress caused by heavy loading in flexible pavements. As depicted in Figure 4c, some micro-cracks still appear at the same dosage of LDPE (6% by weight of bitumen), and the LDPE forms a discontinuous matrix. Phase dispersion of the bitumen can likewise be observed when the LDPE content exceeds 6%.
Fourier Transform Infrared (FTIR) Analysis
The FTIR spectra of the neat bitumen and of the HDPE and LDPE modified binders are presented in Figure 5. The HDPE and LDPE modified binders show the emergence of new peaks, including those associated with OH groups; the other new peaks arise from the absorption of HDPE and LDPE in the bitumen. The overtones around 2000–1800 cm−1, clearly visible in Figure 5, confirm the presence of aromatic carbons. The C–H stretching of organic compounds corresponds to the peaks between 2850 and 3000 cm−1 [32–38], while the other peaks between 3000 cm−1 and 1500 cm−1 indicate the presence of polar groups. These polar groups are largely responsible for creating a stronger bond between the neat binder and the dispersed polymer [39]. C–H bending is observed at 1570 cm−1. It can also be noted that neat HDPE behaves as an apolar material, essentially an alkene (olefin) with a highly crystalline structure; an alkene or olefin is an unsaturated molecule containing one carbon–carbon double bond. Compared with HDPE, LDPE is less apolar and less crystalline. The high crystallinity of the polymers is expected to provide a substantial improvement in the blended polymer–bitumen mix [40].
X-Ray Diffraction (XRD) Analysis
As can be seen in Figure 6, the black line represents the neat bitumen, which is not crystallized. For HDPE added to bitumen, the main peak appeared at 2θ = 26.71° (scan range 0–60°, scan speed 2°/min) with an intensity of 5000 a.u.; for LDPE added to bitumen, the peak appeared at 2θ = 25.20° with an intensity of 4189 a.u. Figure 6 also shows that at lower angles the modified bitumen exhibits a semi-crystalline phase, and the crystalline fraction increases as the polymer absorption increases. Previous research reported that the crystalline behavior of LDPE at 4% by weight of bitumen produced a significant decrease in permanent deformation, fatigue cracking, and thermal cracking at high, intermediate, and low temperatures, respectively [41]. Other work confirmed that a highly crystalline modified bitumen mixture can improve the thermal stability of the mix, which directly affects the rheological properties of the asphalt mixture [42]. The degree of crystallinity of HDPE is higher than that of LDPE, but at the elevated temperatures of the wet mixing process HDPE has a higher viscosity and presents mixing issues with the neat binder. From the XRD results on the neat and modified bitumen, it can be concluded that the crystalline structure of the polymers is a key characteristic governing the chemical properties of the modified bitumen: the addition of the polymers improves the elastic properties of the modified binder and thus provides better resistance to permanent deformation.
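One common way to quantify the degree of crystallinity discussed above is the ratio of the crystalline peak area to the total scattered area of the diffractogram. The sketch below uses a synthetic pattern (a broad amorphous halo plus a sharp peak near 2θ = 26.7°) purely for illustration; in practice the amorphous background would be estimated, e.g., from the neat-bitumen trace.

```python
import numpy as np

def degree_of_crystallinity(two_theta, total_intensity, amorphous_background):
    """Crude crystallinity index: area of crystalline peaks / total scattered area."""
    crystalline = np.clip(total_intensity - amorphous_background, 0.0, None)
    return np.trapz(crystalline, two_theta) / np.trapz(total_intensity, two_theta)

# Synthetic diffractogram loosely mimicking the HDPE-modified binder described above.
two_theta = np.linspace(5.0, 60.0, 1101)
halo = 1500.0 * np.exp(-((two_theta - 22.0) / 10.0) ** 2)   # amorphous halo
peak = 5000.0 * np.exp(-((two_theta - 26.7) / 0.4) ** 2)    # sharp crystalline peak
pattern = halo + peak

print("Estimated degree of crystallinity: %.2f"
      % degree_of_crystallinity(two_theta, pattern, halo))
```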
Rheological Performance Analysis
The rheological performance of the neat binder (PG 58-22) was compared with that of the binders modified with 2% to 6% HDPE and LDPE. Figure 7a,b shows the rheological performance of the unaged binders. The rutting factor G*/sin(δ) was determined at 58, 64, 70, and 76 °C, and the phase angle (δ) at the same temperatures was measured with the DSR. The addition of 2%, 4%, and 6% HDPE and LDPE increased G*/sin(δ), and hence the rutting resistance, relative to the neat binder. High values of the rutting parameter G*/sin(δ) reduce the likelihood of rutting in asphalt, as also reported in previous research [43,44]. Compared with the original binder at 64 °C, the HDPE-modified binders in Figure 7a reached values of 480, 940, and 1970 Pa, corresponding to improvements in rutting resistance of 13.8%, 26.8%, and 56.2% for 2%, 4%, and 6% HDPE, respectively. Figure 7b shows the corresponding LDPE results: values of 500, 1100, and 2250 Pa, corresponding to improvements of 19.23%, 42.3%, and 86.5%. After the short-term aging test, Table 4 lists the phase angles obtained when the RTFO residues were retested in the DSR; as the polymer content increases, the phase angle decreases. Likewise, after the long-term aging test, Table 5 lists the phase angles obtained at 25 °C when the PAV residues were retested in the DSR, and Figure 8a,b shows the long-term aging results for the HDPE and LDPE modified binders. Table 5 confirms that the phase angle decreases as the polymer content increases, indicating an increase in the elastic character of the bitumen. The phase angle of the polymer-modified binders is lower than that of the neat binder irrespective of the temperature; this reduction arises because the polymer-rich phase behaves like an elastic filler within the neat binder matrix [45], showing that the polymer addition improves the high-temperature performance of the asphalt mortar. Since the phase angle represents the lag between the applied shear stress and the resulting shear strain, a lower phase angle means the asphalt mortar can better resist permanent deformation, as shown in Tables 4 and 5.
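The percentage improvements quoted above compare the modified and neat rutting parameters at the same temperature; a minimal sketch of that calculation is shown below. The neat-binder value is an assumed placeholder, since it is not restated here.

```python
def improvement_percent(modified_pa, neat_pa):
    """Relative improvement of G*/sin(delta) over the neat binder, in percent."""
    return 100.0 * (modified_pa - neat_pa) / neat_pa

# Values in Pa; the neat-binder value is assumed for illustration only.
neat_64c = 420.0
hdpe_64c = {"2%": 480.0, "4%": 940.0, "6%": 1970.0}

for dose, value in hdpe_64c.items():
    print("%s HDPE: G*/sin(d) = %.0f Pa, improvement = %.1f%%"
          % (dose, value, improvement_percent(value, neat_64c)))
```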
Dynamic Viscosity Analysis
The viscosity results for the HDPE and LDPE modified binders are shown in Figure 9a,b, respectively. The addition of the polymers produced a marked increase in the viscosity values. The maximum dynamic viscosity for HDPE was obtained at a 2% dosage across all temperatures (Figure 9a): the percentage increases in dynamic viscosity at 135, 145, 155, and 165 °C were 9.67%, 15.8%, 25%, and 18.1%, respectively. The maximum dynamic viscosity for LDPE was obtained at a 4% dosage across all temperatures (Figure 9b): the percentage increases at 135, 145, 155, and 165 °C were 5.90%, 21.80%, 5.0%, and 18.10%, respectively [36].

Figure 9. (a) Dynamic viscosity versus temperature for HDPE modified bitumen; (b) dynamic viscosity versus temperature for LDPE modified bitumen.

Creep Rate and Creep Stiffness Analysis
Using a loading time of 60 s for the HDPE and LDPE modified binders, the creep rate (m) and creep stiffness (S) were obtained with the BBR software, as depicted in Figures 10 and 11. The acceptance criteria require m to be greater than 0.300 and S to be below 300 MPa [46]. The relationship between temperature and creep rate (m) for HDPE and LDPE is shown in Figure 10a,b: the m-value trend for LDPE fell at −6 °C, whereas HDPE met the desired criterion. Figure 11a,b shows the relationship between S and temperature for HDPE and LDPE, respectively. The polymer-modified bitumen becomes stiffer at low temperatures; at a 6% dosage, both HDPE and LDPE showed higher creep stiffness values than the original binder (185 MPa) [47]. The S values are related to the thermal stresses that develop in pavements undergoing shrinkage, and hence to the potential for thermal cracking in the pavement structure.
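The screening against the m > 0.300 and S < 300 MPa limits can be expressed as a short check such as the one below; the (S, m) pairs are illustrative placeholders, not the measured values behind Figures 10 and 11.

```python
# BBR acceptance screening used above: creep rate m > 0.300 and creep stiffness S < 300 MPa.
# The (S, m) pairs below are hypothetical placeholders, not the study's measurements.
results = {
    # temperature (deg C): (S in MPa, m-value)
    0:   (110.0, 0.380),
    -6:  (185.0, 0.310),
    -12: (290.0, 0.280),
}

for temp, (S, m) in results.items():
    ok = (S < 300.0) and (m > 0.300)
    print("T = %4d C: S = %5.0f MPa, m = %.3f -> %s"
          % (temp, S, m, "pass" if ok else "fail"))
```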
The stress rate is related to the m-value: when the m-value decreases, the relaxation of the stress rate also decreases. It can therefore be concluded that low S values and high m values are desirable for long-term pavement performance.

Summarized Conclusions
This research assessed the morphological, rheological, dynamic viscosity, and creep characteristics of asphalt binders modified with waste HDPE and LDPE. The major conclusions drawn from this investigation are summarized as follows:
• The addition of both HDPE and LDPE decreased the phase angle (δ), which shows that the asphalt mortar becomes more resistant to permanent deformation. The high-temperature viscosity of the asphalt binder increased, with a marginal increase observed when 2.0% HDPE and 4.0% LDPE were used in the mix.
• A decreased viscosity facilitates mixing and compaction of the bituminous mix; this could be due to the reduction of spherical particles, which produces a micro-bearing effect. With respect to the rutting parameter, the effect of the spherical particles showed that the greater the rutting parameter, the lower the rutting susceptibility. The 4.0% LDPE blend, with more spherical particles and a lower viscosity than HDPE, can therefore be utilized for road construction.
• The addition of 2% HDPE and 4% LDPE by weight of bitumen improved the rutting resistance by 13.8% and 42.3%, respectively. These dosages also produced the maximum dynamic viscosity measurements, i.e., a 21.80% increase at 145 °C.
However, increasing the polymer content further raises the viscosity and may cause mixing problems during plant-level preparation of the modified bitumen.
• The creep rate (m) trend for LDPE fell at −6 °C, whereas HDPE met the desired criterion. For the modified bitumen, the creep stiffness (S) showed a decreasing trend with temperature, which indicates that the relaxation of the stress rate also decreases.
• The FTIR spectra confirmed the incorporation of HDPE and LDPE: the polymer-modified binders showed the emergence of new peaks. Compared with HDPE, LDPE is less apolar and less crystalline; the high crystallinity of the polymers is expected to provide a substantial improvement in the blended polymer–bitumen mix.
• With the addition of 6% LDPE, a discontinuous LDPE matrix was observed; increasing the LDPE content beyond 6% produces further morphological change along with phase dispersion of the bitumen.
• The XRD results showed that the degree of crystallinity of HDPE is higher than that of LDPE, but at the elevated temperatures of the wet mixing process HDPE has a higher viscosity and presents mixing issues with the neat binder.
• The study recommends the use of waste plastics recovered from plastic bottles and bags in asphalt pavements under construction, in order to improve the bearing capacity of the wearing surface, particularly for large ongoing projects worldwide such as the China–Pakistan Economic Corridor (CPEC).
• Regarding the high-viscosity and mixing problems of the wet mixing process, this research concludes that at high temperature waste LDPE is the better modifier compared with HDPE, although further studies on the microstructural, morphological, and dynamic viscosity behavior of HDPE and LDPE modified binders are still needed.
PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability

In this paper we introduce a new problem within the growing literature of interpretability for convolutional neural networks (CNNs). While previous work has focused on the question of how to visually interpret CNNs, we ask what it is that we care to interpret, that is, which layers and neurons are worth our attention? Due to the vast size of modern deep learning network architectures, automated, quantitative methods are needed to rank the relative importance of neurons so as to provide an answer to this question. We present a new statistical method for ranking the hidden neurons in any convolutional layer of a network. We define importance as the maximal correlation between the activation maps and the class score. We provide different ways in which this method can be used for visualization purposes with MNIST and ImageNet, and show a real-world application of our method to air pollution prediction with street-level images.

INTRODUCTION
In recent years, the use of convolutional neural networks (CNNs) has become widespread due to their success at performing tasks such as image classification or speech recognition. CNNs have achieved outstanding results at the ImageNet challenge and popularized the use of such architectures [25]. Remarkably, these same networks that outperform at the ImageNet challenge are also successful when used for other tasks such as object detection on the PASCAL VOC dataset [28]. Large training datasets, powerful GPUs, and the implementation of regularization techniques such as dropout have also helped boost the performance of CNN architectures [27]. While CNNs continue to excel at countless tasks and competitions, their internal procedures remain a mystery and there is very little insight into how these architectures achieve such outstanding results. We effectively treat these neural networks as black boxes, and it is extremely challenging to understand how they operate, due to their complexity and large number of interacting parts. However complex these black boxes might be, it is vital to acquire a deeper understanding of how they work. There are several reasons for encouraging research in this area. From a scientific perspective, understanding how deep neural networks operate is interesting on its own, but mastering their inner mechanisms will also allow us to improve their results and accuracy. Without any insight into how these black boxes work, their development into better models can only be achieved by trial and error. From a social perspective, we should not be allowing systems that apply opaque and unexplainable algorithms to make decisions that govern health care, banking, or politics.
Once the decisions taken by deep neural networks shift from innocuously classifying hand-written digits to deciding whether someone has a particular disease or is eligible for a bank loan, it becomes imperative to advocate for a right to explanation [11]. This paper focuses on the following question: how can we rank the hidden units of a convolutional layer in order of importance towards the final classification? While research on the inner mechanisms of CNN architectures advances, visualization methods that target each neuron directly (e.g., feature visualization) are infeasible to apply and analyze for all of the thousands of neurons in an architecture. Therefore, our goal is to provide a method that identifies the neurons that contribute the most to the final classification. On top of it, any other visualization or explainability method that applies to a particular neuron can then be studied only on this small set of ranked neurons, or combinations of them, thereby making it practical. We discuss this question thoroughly and propose a novel statistical method called PCACE that ranks the hidden units of a convolutional layer according to their relevance towards the final (class) score. The algorithm is based on the Alternating Conditional Expectation algorithm by Breiman and Friedman [4], which provides the optimal transformations that maximize the correlation between the activation maps of the neurons and the final (class) score. We show how to use our statistical method for visualization purposes by using activation maps, CAM, and the activation maximization method. The combination of the statistical algorithm behind PCACE with several visualization methods yields a new procedure to help in the interpretability and explainability of convolutional neural networks. Besides testing our algorithm on the well-known MNIST and ImageNet datasets, we also provide a real-world use case: air pollution prediction from street-level images. In the case of ImageNet, we replicate our results with both the ResNet-18 and VGG-16 architectures and compare them. We analyze the general features revealed by our PCACE algorithm and consider the differences between gradient-based and statistics-based interpretability methods.

RELATED WORK ON INTERPRETABILITY AND EXPLAINABILITY OF NEURAL NETWORKS
After the rapid success of deep neural networks, the community has recently started to acknowledge that we have very limited understanding of how these architectures work and how they are able to achieve such remarkable results. Some of the first visualization tools proposed were saliency maps [22], which indicate the areas of the input image that are discriminative and most important with respect to the given class by using the intensity of the pixels. That is, given an input image, the saliency method ranks its pixels based on their influence on the final class score by using derivatives and backpropagation. A similar idea is the Grad-CAM method [21], which employs the gradients of any target concept to produce a localization map that highlights the important regions in the image that predict the concept. Grad-CAM can also be combined with existing pixel-space visualizations (Guided Grad-CAM) to achieve visualizations that are both high-resolution and class-discriminative. Another way to study the importance of the pixels in the input image is to perturb the image by occluding patches and observing how the classification score drops [27].
Other authors follow a similar idea by directly deleting some parts of the image in order to find the part of the image that makes the final class score drop the most [8]. A bit differently, Koh and Liang use influence functions to gain understanding of which training points are most relevant to the final classification without having to retrain the network [16]. Their method essentially shows how the model parameters change as we upweight a training point by an infinitesimal amount. In [27], the authors try to understand CNNs backwards by creating a Deconvolutional Network. A DeConvNet can be thought of as a ConvNet model with the same components but in reverse, so that it maps features to pixels. A similar idea is followed in [6], where image representations are studied by inverting them with an up-convolutional neural network. Other work focuses on developing human-friendly concepts to help understand the machine. In [13], Concept Activation Vectors are introduced to provide an interpretation of the internal state of a network in human-friendly concepts, and Deep Dream and Lucid are Google projects that intend to humanize what the hidden layers and neurons see in the input images [19]. Their paper develops human-like visualizations for understanding what each neuron is focusing on. In the conclusion, the authors point out that one of the issues that still stands out in network interpretability is finding which units are most meaningful for understanding neural net activations, which is what we study in this paper. Other studies also engage in trying to bridge human concepts and neural networks; for example, [28] investigates how transferable features are in deep neural networks by differentiating between general and specific features learnt by the architecture. It was asked in [1] whether CNNs learn class hierarchy, and [2] contains a study of the semantic concepts learnt by the units, such as colors, scenes, and textures. Other papers raise criticisms of some of the methods we just described. In [14], it is argued that saliency methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction, and in [15] it is shown that DeConvNets and Guided Backpropagation do not produce the theoretically correct explanations even for a linear model, and so even less for a multi-layer network with millions of parameters. Finally, in [9] and [18], the authors propose that neurons do not encode single concepts and that they are in fact multifaceted, with some concepts being encoded by a group of neurons rather than by a sole neuron by itself. As summarized in this section, most papers in the literature focus on qualitatively studying the specific features and concepts that are being learnt in the network, rather than quantifying the importance of each hidden neuron towards the final class score. Our method bridges this gap by focusing on quantifying the relevance of each neuron, beyond providing qualitative information based on visualization methods. An exception in the literature is the recent paper [3], where the most relevant units for each class are defined by computing which units cause the most accuracy loss when removed individually. However, this approach is too computationally costly.

Class Activation Mapping
It was first observed in [28] that CNNs behave as object detectors even though no location information about the central object is provided. In this paper, we provide more evidence supporting this claim.
In [29] they combine a simple modification of the global average pooling layer with the class activation mapping (CAM) technique to allow a classification-trained CNN to also localize specific image regions in a single forward pass. Examples are provided in Sections 5 and 7, where we compare the activation maps of different neurons with the CAM visualization of the same input image.

Activation Maximization
The activation maximization method [7], instead of highlighting discriminative regions of the input image (as is the case for saliency maps and CAM), synthesizes an artificial image x* (which we will henceforth call the ideal image) that maximizes the activation a_i of a target neuron i [20]:

x* = argmax_x a_i(x; Θ),

where Θ denotes the network parameter set, i.e., the weights and the biases. This is achieved through an iterative process: after initially setting a random image x_0, the gradient of the activation with respect to the image, ∂a_i(x; Θ)/∂x, is computed with backpropagation. Each pixel of the initial noisy image is then changed iteratively to maximize the activation of the neuron, applying the update

x ← x + η · ∂a_i(x; Θ)/∂x,

where η denotes the gradient ascent step size. The final image represents the preferred input for that neuron. Note that the activation maximization method allows us both to create the ideal image that maximally activates a neuron (Section 6) and the ideal image for a particular class (Figure 1). Moreover, the activation maximization method uses the unnormalized final class score immediately before the application of the softmax function, to prevent the values from being squeezed between 0 and 1. We will follow the same principle for our PCACE algorithm. In order to make the ideal image more interpretable, the activation maximization method is normally used with regularization methods, such as ℓ2 decay or Gaussian blur [20].

STATISTICAL METHODS FOR INTERPRETABILITY
3.1 Alternating Conditional Expectation
This section introduces a novel statistical method to rank the hidden units of any neural network in order of importance towards the final class score: PCACE. The name PCACE results from the fusion of PCA (Principal Component Analysis), which we use for dimensionality reduction, and ACE (Alternating Conditional Expectation), which we use to compute the maximal correlation coefficient between a dependent variable (the final class score) and multiple independent variables (each entry of the activation matrix of a particular neuron). The units are ranked on the basis of the strength of their possibly non-linear relationship with the final class score. The Alternating Conditional Expectation (ACE) algorithm [4] is a non-parametric approach for estimating the transformations that lead to the maximal multiple correlation of a response and a set of independent variables in regression and correlation analysis. The iterative algorithm works by estimating optimal transformations in a multiple regression setup, to find the maximal correlation between multiple independent variables X_1, ..., X_p and a dependent variable Y. These transformations minimize the unexplained variance of a linear relationship between the transformed response variable and the sum of the transformed predictor variables [24]. Moreover, ACE does not require any assumptions on the response or predictor variables and is entirely automatic, in contrast to more recent methods for nonparametric dependence testing (e.g., kernel methods like the Hilbert–Schmidt Independence Criterion [10] require specifying a kernel and its lengthscale).
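A small numerical illustration of why such optimal transformations matter: for a purely non-linear relationship, the raw Pearson correlation is close to zero, while the correlation after a suitable transformation of the predictor (the kind of transformation ACE searches for automatically) is close to one. The transformation φ(x) = x² below is chosen by hand purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = x**2 + 0.1 * rng.normal(size=10_000)   # strong, purely non-linear dependence

# Plain Pearson correlation misses the relationship...
print("corr(x, y)      = %.3f" % np.corrcoef(x, y)[0, 1])

# ...but after transforming the predictor (phi(x) = x**2, which is the transformation
# an optimal-transformation method such as ACE would effectively recover),
# the correlation is close to 1.
print("corr(phi(x), y) = %.3f" % np.corrcoef(x**2, y)[0, 1])
```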
ACE also contrasts with Generalised Additive Models in that it transforms the response variable. As summarized in [24], the general regression model for independent variables (predictors) X_1, X_2, ..., X_p and a response variable Y is given by

Y = β_0 + β_1 X_1 + ... + β_p X_p + ε,

where β_0, ..., β_p are regression coefficients to be estimated, and ε is the error term. However, this model is not well-suited for CNN architectures, since the activation functions in the network are not linear, and so instead we must use a non-linear regression method. The ACE regression model has the form

Θ(Y) = φ_1(X_1) + ... + φ_p(X_p) + ε,

where Θ(Y) is a function of the response variable with zero mean and unit variance, and the φ_i(X_i) are zero-mean functions of the predictors X_i, for i = 1, ..., p. ACE then finds the functions that minimize the error variance not explained by the regression,

e² = E[Θ(Y) − Σ_{i=1}^{p} φ_i(X_i)]²,

with respect to Θ and φ_1, ..., φ_p. In our case, given a fixed neuron in a convolutional layer of the architecture, the independent variables correspond to the entries of the activation map produced by the convolution between the fixed weights of the neuron and the input image. The response variable corresponds to the final class score of the input image before the application of the softmax function, to avoid the compression of the values into the interval [0, 1]. Since ACE assigns a number between −1 and 1 to each hidden unit, it allows us to rank all of the hidden units of the network in a reliable, normalized, and clear way. The higher the ACE value, the stronger the relationship between that neuron and the score of a particular class.

Applying ACE to CNN Architectures
We now give a more detailed explanation of how to apply ACE to the CNN setting. Given a trained CNN architecture, we fix one of its convolutional layers, and our goal is to produce a ranking of the neurons that form that layer. Each of these neurons has a fixed matrix of weights, whose size corresponds to the kernel of the convolutional layer. This matrix of weights produces an activation map whenever an input image is fed into the network and the neuron is fired by its activation function. We fix a neuron in this convolutional layer, and we fix a set of input images which will be used to compute the maximal correlation between the neuron and the final class score over this set of images. It can be meaningful to have this set consist entirely of training images, entirely of images unseen by the network, or a mix of both. It is also recommended to make this set of pictures all of the same class, so that we correlate a neuron with its importance towards the classification of a particular class (for example, in MNIST, we can create a ranking of the neurons for each of the 10 digits separately). However, we note that our algorithm PCACE does not need to be class specific (as we use it in Sections 4 and 5), and can also be applied in regression tasks (as we do in Section 7 for spatiotemporal data). Even for classification tasks, it can be interesting to use a set of input images that consists of a weighted mix of all classes, in order to produce a ranking of the overall most correlated neurons. Each of these neurons produces an activation map when an input image is fed into the network, which is a matrix of size s_1 × s_2. ACE will then be working with s_1 · s_2 independent variables X_j, where each X_j corresponds to one of the entries of the activation map.
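As a concrete illustration of how the predictor matrix and response vector can be assembled for one channel, the sketch below uses a PyTorch forward hook on a torchvision ResNet-18. The choice of layer, the channel index, and the generic images/labels iterables are assumptions for illustration, and the activations are stored row-wise (one image per row) rather than column-wise as described in the text.

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

acts = {}
model.conv1.register_forward_hook(lambda m, i, o: acts.update(out=o.detach()))

def collect_channel_data(images, labels, channel):
    """Build the predictor matrix X (one flattened activation map per row) and the
    response vector y (pre-softmax score of the correct class) for one channel."""
    rows, scores = [], []
    with torch.no_grad():
        for img, label in zip(images, labels):
            logits = model(img.unsqueeze(0))          # raw class scores, before softmax
            rows.append(acts["out"][0, channel].flatten())
            scores.append(logits[0, label])
    X = torch.stack(rows).numpy()                     # shape: (n_images, s1 * s2)
    y = torch.stack(scores).numpy()                   # shape: (n_images,)
    return X, y

# Example call (images: iterable of 3x224x224 tensors, labels: matching class indices):
# X, y = collect_channel_data(images, labels, channel=6)
```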
We favor this method over simpler ones (e.g., taking only the mean or the maximum of the s_1 · s_2 activation values so as to have a single independent variable), which neglect the complexity and range of values within the activation map. The response variable is given by the final correct class score right before applying the softmax function. For each pass of one of the input images of the set, we store the s_1 · s_2 activation values as a column of a matrix X, and the final class score as an entry of a vector y. At this stage, we cannot input X and y directly into the ACE algorithm. Firstly, the dimensions of the matrix X are normally too large for ACE to compute the maximal correlation coefficient in a reasonable time. Secondly, the ACE algorithm halts and outputs nan if values in the matrix are too small, due to division by 0. One way to fix this is to standardize each row of the matrix (i.e., standardize each of the s_1 · s_2 predictor variable vectors): center to the mean and scale component-wise to unit variance. However, the issue might still not be resolved if the standard deviation of any of the s_1 · s_2 predictor variables is 0. Both problems can be solved at once by applying PCA (Principal Component Analysis) before inputting X and y into ACE. PCA performs a linear mapping of the data in the original matrix to a lower-dimensional space such that the variance of the data in the low-dimensional representation is maximized, using eigenvalue decomposition [12]. After applying PCA, the new matrix X′ has smaller dimensions but inherits the maximum possible variance of the original data. Once the dimension of the original matrix has been reduced, we can apply ACE in a computationally efficient way. It is important to remark that when using PCA for the purposes of PCACE one needs to reduce the number of rows (the number of predictor variables), and not the number of columns (the number of input images). We take the absolute value of the final PCACE value, as we are concerned with the magnitude of the correlation; in fact, very few channels in our experiments presented a negative ACE correlation.

The PCACE Algorithm
PCACE is an algorithm that can be effectively used to rank hidden channels for the following reasons:
(1) It bridges together two effective, reliable, and powerful statistical methods: the Alternating Conditional Expectation (ACE) and Principal Component Analysis (PCA).
(2) By using PCA on top of ACE, we can significantly reduce the dimensions of the matrix X, therefore making the ranking computation time-efficient.
(3) Since the number of predictor variables is significantly reduced, we can increase the number of input images and PCACE will still be computationally efficient.
(4) ACE will not encounter nan problems because PCA standardizes the original data.
(5) The output PCACE values are all between 0 and 1, which yields a standardized way to make comparisons across different neurons and different layers.

AN EXAMPLE WITH MNIST
We first provide an example of applying the PCACE algorithm to the MNIST dataset to produce a ranking of the neurons in the first convolutional layer for each of the 10 digits. The size of the activation matrix of the hidden units of the first convolutional layer in our architecture is 26 × 26, which means there are 676 predictor variables X_j.
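A minimal sketch of the PCA-plus-ACE step is given below. The ACE routine uses a crude equal-count binning smoother for the conditional expectations, so it illustrates the idea rather than reproducing the authors' implementation; X is assumed to be arranged with one image per row, so here PCA reduces the number of predictor columns.

```python
import numpy as np
from sklearn.decomposition import PCA

def _cond_exp(x, z, n_bins=10):
    """Crude smoother: E[z | x] estimated by averaging z within equal-count bins of x."""
    est = np.empty_like(z)
    for idx in np.array_split(np.argsort(x), n_bins):
        est[idx] = z[idx].mean()
    return est

def ace_max_corr(X, y, n_iter=100, tol=1e-5):
    """Minimal ACE: alternate conditional-expectation updates to estimate the maximal
    correlation between the predictors in X (n_samples, p) and the response y."""
    n, p = X.shape
    theta = (y - y.mean()) / y.std()
    phi = np.zeros((n, p))
    prev = 0.0
    for _ in range(n_iter):
        for j in range(p):                     # update each predictor transform phi_j
            resid = theta - (phi.sum(axis=1) - phi[:, j])
            phi[:, j] = _cond_exp(X[:, j], resid)
            phi[:, j] -= phi[:, j].mean()
        s = _cond_exp(y, phi.sum(axis=1))      # update the response transform theta
        theta = (s - s.mean()) / s.std()
        corr = np.corrcoef(theta, phi.sum(axis=1))[0, 1]
        if abs(corr - prev) < tol:
            break
        prev = corr
    return abs(corr)

def pcace_value(X, y, n_components=20):
    """PCACE for one channel: PCA-reduce the predictors, then run ACE."""
    X_red = PCA(n_components=min(n_components, X.shape[1])).fit_transform(X)
    return ace_max_corr(X_red, y)

# Demo with synthetic data standing in for one channel's flattened activations.
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(300, 100))
y_demo = np.tanh(X_demo[:, 0]) + 0.1 * rng.normal(size=300)
print("PCACE value (synthetic demo): %.3f" % pcace_value(X_demo, y_demo))
```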
For each input image, we record the activation value achieved by each element of the activation matrix, store it, and record the final class score. After feeding n input images into the network, we obtain 676 vectors of length n for the predictor variables, and one vector of length n for the response variable. We use 500 input images of each particular digit from the training dataset. After repeating this procedure with all the neurons in the layer for each of the 10 digits, we obtain the final PCACE rankings. We observe that many of the channels that PCACE ranks as the most important for each digit are precisely the ones that focus on the surroundings and details of the digit instead of the global shape, which we call reverse channels. The reverse channels for MNIST are numerically characterized by having a negative weight mean. They appear to activate at the edges, corners, and surroundings of the digit when the input digit is a real image from the training set (or a mean of the images in the training set), but are nearly dead (i.e., activation close to 0) when the input image is the ideal class digit created with the activation maximization method. This indicates a potential shortcoming of the activation maximization method: the artificial image produced by activation maximization does not activate most of the channels that the PCACE algorithm ranks as the most important for the correct classification of the digit. These channels do not focus on the main class object (the digit), and therefore are not fired because the pixels that they identify are not maximized by backpropagation. More broadly, we aim to point out that interpretability methods that are gradient-based (such as CAM or saliency maps) do not necessarily correlate well with those that are statistics-based (such as the correlation method of PCACE). As we further develop in Sections 5 and 6, the channels in CNN architectures encode more information than that explicitly related to the class object, and this complexity is oftentimes lost with gradient-based methods.

IMAGENET RESULTS
We try our PCACE algorithm on the more complex ImageNet (1,000-class) dataset, with both the ResNet-18 and VGG-16 architectures. With ResNet-18 we are able to use CAM, and thus we can compare the CAM visualizations with the activation maps of the top PCACE channels. While it is not possible to obtain CAM visualizations with VGG-16 [29], its first convolutional layers preserve the size of the input image, which allows for a clearer visualization of the activation maps (112 × 112 for ResNet-18 as opposed to 224 × 224 for VGG-16). We apply the PCACE algorithm to all convolutional layers of both architectures for several ImageNet classes, using 300 input images for each class (randomly selected from the ImageNet dataset), and using PCA to reduce by half the size of the matrix that we input to the ACE algorithm. The algorithm yields class-specific rankings of the neurons in each convolutional layer (ranked independently in each layer). We then plot the activation maps of the top PCACE channels (i.e., those that show the highest correlations) and compare them to the CAM visualization of [29]. Examples are shown in Figure 2. We support the claim in [3,28] that object detectors emerge in CNNs despite the absence of supervised localization training. Not only so, but we observe that the different channels are detectors for different kinds of objects in the input image, and that they are strongly consistent across multiple input images.
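Since reverse channels are characterized above by a negative weight mean, the short sketch below shows how such channels could be flagged in a first convolutional layer; the randomly initialized MNIST-style layer is a stand-in, not the trained architecture used in the paper.

```python
import torch
import torch.nn as nn

# Stand-in MNIST-style first layer: 28x28 input, 3x3 kernels, so activation maps are 26x26.
conv1 = nn.Conv2d(1, 32, kernel_size=3)

# A channel is flagged as a "reverse channel" candidate when its kernel weights have a
# negative mean, the numerical signature described in the text.
weight_means = conv1.weight.detach().mean(dim=(1, 2, 3))   # one mean per output channel
reverse_channels = torch.nonzero(weight_means < 0).flatten().tolist()

print("Channels with negative weight mean:", reverse_channels)
```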
For example, as shown in Figure 3, channel 6 in the first convolutional layer of ResNet-18 always detects (i.e., has higher activation values on) the object immediately next to the class object (which in this case is Egyptian cat). Moreover, channel 6 is the second-highest-ranked PCACE channel, whereas channel 24 (which consistently focuses on the main class object) ranks last in the PCACE ranking. The magnitudes of the activations are also consistent within each channel. We visualize different PCACE-ranked channels across the figures in the paper for breadth purposes. Not only does this show the locative power of CNNs despite being trained only on labels, but we also observe that the most correlated channels tend to be those that do not target the main class object and instead focus on other objects in the images or on the edges of the shapes in the images. Figure 3 shows how some of the top 5 PCACE channels consistently focus on objects that are not the class object, or on the edges of the image shapes, whereas they tend to not highlight the main class object. There are also instances when this is not the case and some of the top PCACE channels focus on the main class object, but they are scarce. We repeated the same visualizations with VGG-16 and obtained the same observations, which are shown in Figure 4 (note that VGG-16 does not allow for CAM visualizations, which is why they do not appear in Figure 4). Therefore, these findings are not specific to the ResNet-18 architecture, and we hypothesize that they extend to other architectures as well. These observations are coherent with the findings in [17], where it is argued that different neural networks learn the same representations. Finally, Figure 5 shows the sorted PCACE values for all the convolutional layers of the VGG-16 architecture for the ImageNet class Egyptian cat, in order to analyze how the correlation values vary across the different convolutional layers. The PCACE values follow similar trends across the different layers, and the curves are almost superposed. Deeper layers tend to have lower PCACE values at the smaller end, but higher correlations at the larger end (e.g., the highest PCACE value in the first convolutional layer does not exceed 0.32, whereas the highest PCACE value in Features 28 exceeds 0.65). However, Figure 5 shows that the count distribution of the PCACE values within the same block still differs across layers. As with MNIST, we find no relevant correlation between the PCACE rankings of different ImageNet classes, thereby indicating that a global ranking cannot be directly deduced from the class-based PCACE rankings. We repeated the same experiment with the activation maximization method as we did for MNIST: we feed into the network the ideal image generated through activation maximization for each class and record the activations of each channel. We do not find any correlation between PCACE values and the maximum or mean activation of each channel, which again indicates some discrepancy between gradient-based and correlation-based methods.

FILTER VISUALIZATION WITH ACTIVATION MAXIMIZATION
As explained in Section 5, it becomes harder to visualize the activation maps of the deeper convolutional layers as their size decreases. Therefore, we turn to the technique of activation maximization in order to analyze the feature visualizations of the different channels across multiple convolutional layers. We produce the ideal synthesized image for each channel in all convolutional layers of VGG-16 using the tf-keras-vis library.
We use ℓ2-norm regularization and loss steps of size 50. In Figure 7 we show the filter visualization of the top 5 PCACE channels for the class Egyptian cat for each convolutional layer in VGG-16. Note that the ideal image of a channel is independent of the class, but the PCACE ranking is not. As in previous situations, showing the feature visualizations for all channels would have required 4,224 images. With the PCACE algorithm, we can meaningfully decide which subset to visualize without this choice being arbitrary. Figure 7 also shows how the channels sequentially encode more complex information as we move deeper into the network.

USE CASE: APPLICATION TO STREET-LEVEL IMAGES FOR AIR POLLUTION DETECTION
Lastly, we show the results of the PCACE algorithm in a real-world application where the aim is to predict air pollution levels from street-level images. We use weights from a model trained by [23] using a slightly modified ResNet-18 architecture whose outputs are continuous pollutant levels. In this paper, we use the weights trained on a subset of images from the city of London for predicting annual NO2 levels. We include this application for two reasons: first, to present the use of PCACE on datasets other than the typical MNIST and ImageNet; secondly, whereas Sections 4 and 5 are centered around classification tasks, predicting air pollution from images is a regression task, where there is only one final score instead of a score per class. Still, since in this paper we have analyzed PCACE from a class-based perspective, we apply the PCACE algorithm to 300 input images from the most polluted areas in the city of London (i.e., the top decile). For each of these 300 images, both the true and predicted NO2 values are above 84 µg/m³. This prompts the following question: what are the channels detecting in high-pollution images? In Figure 8 we present some examples of the original input street-level image, its CAM visualization, and the activation maps of the bottom 2 and top 2 PCACE channels. Firstly, we show that the channels continue to act as object detectors despite the architecture having been trained on a regression task instead of a classification one. Since in this setting we do not have a main class object as a point of reference, understanding what the neurons are focusing on becomes a more open-ended question. We find that while some channels continue to focus on the edges of the image, many act as object detectors for buildings and, more remarkably, trees. As was the case for MNIST and ImageNet, the activation maps of a fixed channel are consistent across different input images. For example, as can be seen in Figure 8, channel 42 consistently activates at the edges of the image, whereas channel 45 consistently detects the trees in the input image. In particular, channel 45, the second-highest-ranked PCACE channel, is a surprisingly powerful tree detector (see Figure 9), even when trees appear as very small parts of the image, as in the remarkable case of the top-right image in Figure 9. More surprisingly, there is no tree class in ImageNet-1,000, which provides further evidence for the strong object detection capabilities of current CNN architectures. Moreover, we observe that the bottom PCACE channels do not perform tree detection, as shown in Figure 8. This further indicates how the PCACE ranking can help in interpreting how the network encodes key concepts and identifies certain classes.
When we compare the activation maps of the PCACE channels to the CAM visualizations of the same street-level images from [23], as exemplified in Figure 8, we observe that the CAM method tends to highlight the pixels that correspond to parts of the road, which likely indicate higher levels of pollution. As we argued, the activation maps act as object detectors, and therefore in the case of regression tasks we recommend leveraging CAM methods (which can highlight particular areas of the image beyond specific objects) together with the PCACE activation maps. Nevertheless, the PCACE method alone ranked highest the neurons that perform tree detection, and it is sensible to conclude that trees in cities are highly predictive of pollutant levels, indicating the effectiveness of the PCACE ranking in this task.

CONCLUSIONS AND DISCUSSION
In this paper we have presented the PCACE algorithm: a new statistical method that combines Principal Component Analysis for dimensionality reduction with the Alternating Conditional Expectation algorithm to find the maximal correlation coefficient between a hidden neuron and the final class score. PCACE returns a number between 0 and 1 that indicates the strength of the non-linear relationship between the multiple predictor variables (the elements of the activation matrix of a particular channel when feeding multiple input images into the network) and the response variable (the correct final class score before the softmax function for those same images). We then use the different PCACE values to rank the hidden units in order of importance. PCACE yields a rigorous statistical and, most importantly, standardized method which is able to quantify the relevance of each neuron towards classification. Thus, PCACE constitutes a useful tool to analyze deep learning models trained on spatiotemporal data, as shown by the top ranking of tree-detector channels on our street-level air pollution imagery. We have tested our algorithm on two well-known datasets for classification tasks, MNIST and ImageNet. We have also used it in a real-world application to street-level images for detecting air pollution, which also shows how PCACE can be used in regression tasks beyond classification ones. We have provided extensive evidence that the channels in CNN architectures act as object detectors, and not only for the main class object in the case of classification tasks, but also for secondary objects in the images. Moreover, most of the top PCACE channels tend to be those that detect these secondary objects or the corners and edges of the input image. We have shown that the type of object detection ability is consistent for each particular channel independently of the input image (e.g., edges, small sets of pixels, secondary objects). Moreover, these object detection capabilities are preserved in the case of regression tasks, as we have shown through the strong tree detection properties of the top PCACE channels in the street-level air pollution images. We have also observed that the top-ranked PCACE channels tend to not highlight the main object of their activation class, and instead focus on secondary objects of the image or on the edges of the main shapes. In the case of MNIST, we have shown that when the ideal image synthesized by the activation maximization method is fed into the architecture, the top-ranked PCACE channels tend to present activations close to 0.
In the case of ImageNet, there is also no correlation between the mean or the maximum activation value in the activation map of each channel and its PCACE value. This indicates a discrepancy between gradient-based methods and correlation-based methods, and we thus recommend combining interpretability methods that rely on correlations with those that rely on optimization, as they tend to provide complementary information. The PCACE values for each convolutional layer can be used in several ways. Firstly, one can study the correlation values directly, as we did in Figures 5 and 6. We showed that the PCACE values across the different convolutional layers in an architecture are coherent and follow the same trend within each block, with deeper layers acquiring higher correlation values. We believe that the coherence of the PCACE values across different convolutional layers justifies using the algorithm to compare channels between different layers. In this paper, we have always ranked the neurons within the same convolutional layer, but for future work it would be interesting to merge the PCACE values of all the neurons in the architecture, given the standardized properties of PCACE. Secondly, we can use the PCACE rankings for visualization purposes. In this paper we have focused on producing the activation maps of the top PCACE channels, as well as the synthesized image produced by the activation maximization method when optimizing for each of the channels separately. The activation maps are meant to be interpreted with respect to an input image, and hence are better for showing the object detection properties of CNNs. On the other hand, the activation maximization method allows us to perform a feature visualization study of the top PCACE channels, as shown in Figure 7. We believe that many other visualization and interpretability methods which are channel-based can be added on top of PCACE with the goal of interpreting CNN models trained on spatiotemporal data. A pressing issue in current research on explainability is the infeasibility of visualizing all of the neurons in an architecture, which number in the thousands. Previous works needed to make an arbitrary choice of which neurons to illustrate when presenting an interpretability method. The PCACE algorithm now allows this choice to be made in a rigorous manner. Other directions for future work include pruning on top of PCACE to further understand the impact of the top-ranked neurons. It would also be necessary to compare the PCACE rankings to other quantifications of channel importance, as was recently done in [3]. However, there is a scarcity of work in this direction, and thus a lack of standardized methods to compare to. We hope that our algorithm provides a step towards a quantifiable notion of explainability in the deep learning field. Lastly, we could consider performing a non-class-based PCACE and running the algorithm with a weighted set of images from all classes. We could also aim to incorporate the possible correlations between pairs of neurons, or even consider computing the PCACE with groups of neurons as predictor variables, as neurons have also been found to work in groups ([9], [18]). There is still much to be done to open up these black boxes. SUPPLEMENTARY MATERIAL We provide pseudocode for the PCACE algorithm in Algorithm 1. For reproducibility purposes, the relevant code and data can be found at https://github.com/silviacasac/ranking-CNN-neurons. Algorithm 1 PCACE Algorithm.
1: For each channel c in the conv layer do:
2:   For each input image do:
3:     Store the activations of channel c into the matrix A_c.
4:     Store the final class score into the vector y.
5:   Apply PCA to reduce the dimensionality of the matrix A_c.
6-15: Run the ACE iteration on the PCA-reduced activations and y: let ε be the error tolerance and alternate the conditional-expectation updates of the transformed predictors and response until the change falls below ε.
16-17: Return the absolute value of the Pearson product-moment correlation coefficients between the ACE-transformed variables.
18: Sort the final values.
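As a complement to Algorithm 1, a minimal Python sketch of the surrounding pipeline is given below. It assumes a Keras model and a labelled image batch, and it replaces the ACE maximal-correlation step with a simple stand-in (the absolute Pearson correlation between the first principal component and the class score), so the returned values only approximate the true PCACE statistic; model, layer_name, x, and y are placeholder names rather than code from the paper's repository.

```python
# Minimal PCACE-style pipeline for one convolutional layer of a Keras model.
# The ACE step is replaced by a simple linear-correlation stand-in, so this
# only approximates the PCACE values described in the paper.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from scipy.stats import pearsonr

def rank_channels(model, layer_name, x, y, n_components=10):
    layer = model.get_layer(layer_name)
    feature_extractor = tf.keras.Model(model.inputs, layer.output)
    acts = feature_extractor.predict(x)                 # shape (N, h, w, channels)
    n_images, h, w, n_channels = acts.shape
    scores = []
    for c in range(n_channels):
        a_c = acts[:, :, :, c].reshape(n_images, h * w)  # activation matrix A_c
        n_comp = min(n_components, n_images, h * w)
        z = PCA(n_components=n_comp).fit_transform(a_c)  # dimensionality reduction
        # Stand-in for ACE: strength of association between the first
        # principal component and the final class score.
        r, _ = pearsonr(z[:, 0], y)
        scores.append(abs(r))
    order = np.argsort(scores)[::-1]                     # channels ranked high to low
    return order, np.array(scores)
```

Swapping the stand-in for a full ACE implementation recovers the maximal-correlation ranking described in the paper; the rest of the pipeline (activation capture, PCA, per-channel ranking) is unchanged.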
Performance Evaluation of a Direct Absorption Collector for Solar Thermal Energy Conversion: The solar absorption efficiency of water as a base fluid can be significantly improved by suspending nanoparticles of various materials in it. This experimental work presents the photothermal performance of water-based nanofluids of graphene oxide (GO), zinc oxide (ZnO), copper oxide (CuO), and their hybrids under natural solar flux for the first time. Nanofluid samples were prepared by the two-step method and the photothermal performance of these nanofluid samples was evaluated under natural solar flux in a particle concentration range from 0.0004 wt % to 0.0012 wt %. The photothermal efficiency of water-based 0.0012 wt % GO nanofluid was 46.6% greater than that of the other nanofluids used. This increased photothermal performance of GO nanofluid was associated with its good stability, high absorptivity, and high thermal conductivity. Thus, pure graphene oxide (GO) based nanofluid is a potential candidate for direct absorption solar collection to be used in different solar thermal energy conversion applications. Introduction Solar energy is the most abundant renewable source, and can be used for space heating [1], electricity generation, desalination, and many other similar applications [2]. Owing to the increase of the population, the depleting trend of non-renewable energy resources must be taken into consideration [3]. Efficient energy use is a difficult task [4]. The existing literature describes different types of collectors that rely upon surface-based solar absorption and heat transfer, which is engaged by collectors to a […] Experimental Setup Nanoparticles of three different materials (CuO, ZnO, and GO) were used in this experimental investigation. The experimental setup consisted of a pyranometer, direct solar absorber, digital scale, K-type thermocouples, and a data acquisition system or data logger. The solar radiation was absorbed by the nanofluid present in the sample container. The pyranometer interfaced with the computer through a data logger. As a result of evaporation, weight losses occur, which were measured using a digital scale. K-type thermocouples were used to measure the average temperature of the nanofluid. Thermocouples were calibrated using a NIST (National Institute of Standards and Technology) traceable precision glass thermometer with ±0.01 °C resolution, and the uncertainty in temperature measurement was ±0.25 °C. The data logger was used to record the data over the time of the experiment. A simplified experimental setup scheme is shown in Figure 1.
Deionized water (IPEX, Inc., Oakville, ON, Canada) was used as a base fluid to prepare the nanofluid samples. A petri dish was used as a sample container to hold the nanofluid samples. It was deep enough that three different thermocouples could be installed at three different heights to take the average temperature, and it allowed direct absorption of the solar irradiation. It was a cylindrical container with a diameter and height of 7 cm each. Before the start of each experiment, 250 g of water was placed in this petri dish, with different concentrations of nanoparticles. The petri dish was placed on a digital scale for measuring weight loss in the fluids. Change in bulk fluid temperature was determined by three K-type thermocouples (OEM WRR2-130), which were placed at three typical depths: just at the top surface of the nanofluid, in the middle, and close to the bottom of the petri dish. Ambient temperature was measured with the help of a fourth K-type thermocouple. Light irradiance was measured with a pyranometer (SR30-D1). All data were registered to the computer via a data logger (CX402-XXM). Nanofluids' Preparation Water-based nanofluids of ZnO, CuO, GO, and their hybrids were prepared by the two-step method. For example, in the preparation of copper oxide nanofluid, sodium hydroxide (NaOH) and copper chloride dihydrate (CuCl2·2H2O) were used as chemical reagents for the synthesis.
Copper chloride and sodium hydroxide were used according to a molar ratio (CuCl2:NaOH) of 0.5:1 to prepare the solution sample. The amount of CuCl2 required by this molar ratio was dissolved in 100 ml of deionized water. The precipitating agent (NaOH) was added dropwise to this solution under constant stirring, both to control the pH, which was maintained at a value of 12 throughout the reaction, and to obtain less chemically dispersed nanoparticles. There is no time limit for controlling the pH value during the preparation of the nanofluids. The precipitate settled down upon the completion of the reaction process, and the sample was then aged for 10 h, because the particles settle on the surface of the beaker and this yields less dispersed nanoparticles. The precipitate was collected and washed a number of times with deionized water through filter paper to make it free from sodium and chloride ions. The precipitates were dried in an oven at 80 °C for 10 h. The dried precipitate was ground using a pestle and mortar to a clear black powder, which was then sintered at 600 °C in a muffle furnace for 3 h. Different furnace treatment times produce changes in the product, that is, changes in the crystal structure, size, and length of the nanoparticles. In this work, a co-precipitation method was thus used to prepare the nanoparticles, where NaOH served both as the precipitating agent and as a terminator for particle growth, because the particles cannot easily come together; furthermore, the material expands during the calcination process, which is why the treatment was 80 °C for 10 h followed by 600 °C for 3 h. The same procedure was followed for the synthesis of the other nanomaterials, that is, GO and ZnO [28]. To make a nanofluid sample, a measured amount of nanopowder was mixed with a given base fluid volume and the mixture was sonicated for 30 min before starting the photothermal conversion experiment. Three different concentrations (0.0004 wt %, 0.0008 wt %, and 0.0012 wt %) of nanoparticles were used in this research. Nanopowders up to the maximum concentration of 0.0012 wt % in the experimental range were suspended in the base fluid using ultrasonication and magnetic stirring. A visual method and absorption spectra were used to evaluate nanoparticle sedimentation; no sedimentation was observed for 30 minutes after suspending the nanoparticles in the base fluid. The photothermal conversion experiments were conducted in a particle concentration range from 0.0004 wt % to 0.0012 wt % for the individual water-based nanofluids of GO, ZnO, and CuO, while the hybrids (binary nanofluids) GO-CuO and GO-ZnO were analysed at 0.0004 wt % concentration under natural solar flux. Table 1 shows that experiments were carried out at 0.0004 wt % for the individual nanofluids of GO, ZnO, and CuO and for the hybrid combinations GO-CuO and GO-ZnO. The amount of GO differs between hybrids, but the total weight concentration of the hybrid nanofluids is kept the same by varying the concentration of the other nanoparticles used in this study. It was observed that adding a very small amount of nanoparticles to the base fluid enhanced the thermophysical properties of the base fluid and the heat transfer rate.
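To put these very low particle loadings in perspective, the nanopowder mass implied by a given weight concentration in 250 g of base fluid can be estimated with the simple check below (assuming wt % is defined relative to the mass of the base fluid; the printed values are back-of-the-envelope figures, not measurements from this study).

```python
# Rough check of the nanopowder mass implied by a weight concentration,
# assuming wt % is taken relative to the 250 g of deionized water used per run.
def nanoparticle_mass(wt_percent, base_fluid_mass_g=250.0):
    return base_fluid_mass_g * wt_percent / 100.0

for wt in (0.0004, 0.0008, 0.0012):
    print(f"{wt} wt % -> {nanoparticle_mass(wt) * 1000:.1f} mg of nanopowder")
# 0.0004 wt % -> 1.0 mg, 0.0008 wt % -> 2.0 mg, 0.0012 wt % -> 3.0 mg
```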
Photothermal efficiency was sensitive to the concentration mixing ratio in the case of the binary nanofluids GO-CuO and GO-ZnO, and excessive addition of an individual nanoparticle component (ZnO or CuO) lowered performance. Optical Absorbance In this study, a UV-vis spectrophotometer (Shimadzu 1800, Japan) was used for evaluating the dispersion stability of the prepared nanofluids. Incident scanning light is directed onto the sample in the wavelength range of 300 to 1000 nm. All data were recorded at room temperature (25 °C). The spectrophotometer works on the principle of the Lambert-Beer law (A = εcl), where A is the absorbance, ε the molar absorption coefficient, c the molar concentration, and l the optical path length. Two cuvettes, each of 0.0001 g weight capacity, are used: one contains the reference base fluid, while the other is filled with nanofluid. Incident light of equal intensity is directed through both cuvettes and the difference in the transmitted light is measured by the detectors after passing through the solutions. Hence, a spectrum of absorbance against wavelength is obtained, which reflects the concentration of nanoparticles in the nanofluid. By comparing this spectrum at a specific wavelength after defined time intervals, the concentration and thus the stability of the nanoparticles are assessed. In the UV-visible region, the different types of nanofluids at 0.0004 wt %, 0.0008 wt %, and 0.0012 wt % concentrations have different absorption peaks. The experimental results showed that different nanofluids have different peaks of optical absorption over the UV-visible spectrum. The copper oxide, zinc oxide, and graphene oxide nanofluids displayed broad absorption shoulders around 275 nm, 314 nm, and 287 nm, respectively, at the three different weight concentrations. The intensity of the absorption peaks increased with the rise in the concentration of the nanofluid. The graphene oxide nanofluid absorption peak is higher than that of all the other nanofluids used in this research when compared at the same weight particle concentrations of 0.0004 wt %, 0.0008 wt %, and 0.0012 wt %. This is because of the augmented performance of carbon-based nanofluids (GO), which have good stability, high absorptivity, and high thermal conductivity, properties which make them distinct from the others. GO can fluoresce over a wide range of wavelengths (from near-infrared to ultraviolet) and effectively reduce the fluorescence of other fluorescent dyes. In contrast, minimum absorbance was observed in the case of deionized (DI) water because, in the visible light spectrum, water is a poor absorber of solar energy. Adding nanoparticles that have good absorptivity in the visible region can significantly enhance the solar absorption of water. Compared with graphene oxide, the other nanofluids have their absorption peaks in the UV-visible region. Although the ZnO and CuO nanofluids do not have very strong absorption peaks in the UV region, their absorption is far better than that of water, as shown below in Figure 2. In Figures 3 and 4, the optical properties of the composite nanoparticles (GO-ZnO and GO-CuO) are evaluated by UV-vis spectroscopy [45,46]. The spectrum exhibits absorption in the visible and infrared regions, but prominent absorption occurs in the UV region. In both the GO-ZnO and GO-CuO binary composites, the absorption peaks of pure GO, pure ZnO, and pure […]
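As an illustration of how the Lambert-Beer relation above supports the stability assessment, the ratio of absorbances measured at a fixed wavelength before and after a settling period gives the fraction of particles still suspended, since ε and l cancel; the absorbance values in the sketch below are hypothetical placeholders, not readings from this study.

```python
def concentration(absorbance, molar_absorptivity, path_length_cm=1.0):
    # Lambert-Beer law A = epsilon * c * l  =>  c = A / (epsilon * l)
    return absorbance / (molar_absorptivity * path_length_cm)

A_fresh, A_after_24h = 1.20, 1.08   # hypothetical absorbance readings at one wavelength
eps = 1.0                           # arbitrary units: epsilon and l cancel in the ratio
suspended_fraction = concentration(A_after_24h, eps) / concentration(A_fresh, eps)
print(f"{suspended_fraction:.0%} of the initial concentration remains suspended")
```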
An increasing linear trend in bulk fluid temperature can be seen in the readings of all three thermocouples at the start of the experiments. However, this increasing trend is not continued later. This is because, at the beginning of the experiment, most of the transferred heat energy results in a linear rise in temperature, and very little heat is lost to the environment. As the nanofluid temperature increases, the temperature difference between the nanofluid sample and the environment also increases, which enhances the heat loss, so linearity is no longer observed. The increase in temperature is very small once the temperature difference approaches its maximum value, and it is clear that, after about 600 s, the linear trend ended and a non-linear trend took over during the experiment. Moreover, the temperature rises at a greater rate for TC1 than for the other two thermocouples (TC2 and TC3). This is because of the decreasing tendency of the solar light irradiance along the optical path length. The highest temperature variation between the top and bottom surfaces of the nanofluid was 3.86 °C. Change in Fluid Volume Temperature The environmental temperature remained almost the same throughout the experiments in this study. If a comparison is made between the nanofluid and deionized water temperatures, it can be seen that all nanofluids have greater temperature gradients than DI water (see Figure 5). It can also be seen that the nanofluids' temperature increased with the increase in nanoparticle concentration, as presented in Figures 6 and 7. The greatest increase in the average change in temperature (2.51 °C) was achieved in the GO nanofluid compared with the other nanofluids used in this study, because of its high solar absorptivity. The specific heat capacity of the nanofluids was observed to be very close to that of water. Moreover, the difference in the temperature increase is due to the high absorbance of the graphene oxide (GO) nanofluid in the visible region. GO is a black, carbon-based nanofluid and, thanks to this characteristic, it shows augmented absorptivity of solar irradiation. The temperature variations for the 0.0004 wt % hybrid nanofluids GO-CuO and GO-ZnO and DI water are displayed in Figures 6 and 7, respectively. The average change in temperature increased initially and then decreased with increasing copper oxide and zinc oxide concentration, although it remained higher than that of the individual ZnO and CuO nanofluids. The increase in the wt % of CuO and ZnO nanoparticles in GO reduced the mixture performance even at lower fractions of CuO and ZnO nanoparticles, but the hybrids were still superior to DI water. Thus, it can be observed that the addition of GO to the metal-oxide nanoparticles, that is, the binary nanoparticles, enhanced the overall solar energy absorption. This increased the average change in temperature, which resulted in an enhancement in photothermal efficiency compared with the individual nanofluids, except for pure GO nanofluid, because pure GO nanofluid is a carbon-based nanofluid having the highest solar absorptivity compared with the other individual nanofluids (ZnO, CuO) and the binary nanofluids GO-CuO and GO-ZnO used in this study. Mass Loss of Nanofluids A digital balance was used to measure the reduction in mass of the nanofluids due to evaporation. The measurement duration was 1800 s under natural sunlight. It was observed that the reduction in the mass of the nanofluids is directly related to the bulk temperature increment. Instead of heating the bulk fluid, most of the absorbed energy is used to evaporate the fluid. An increment in the temperature of the sample as a whole means that more heat is lost to the environment.
The maximum mass loss was observed for the GO nanofluid and the minimum for DI water. It can be observed that, during the experimentation, the mass loss of all types of nanofluids was enhanced with the enhancement in the concentration of the nanoparticles. The maximum mass loss was achieved in the GO nanofluid compared with the other nanofluids used in this study because of its high solar absorptivity. The average mass loss of the binary nanofluids increased first and then reduced with increasing CuO and ZnO concentration, while remaining higher than that of the individual CuO and ZnO nanofluids, as seen in Figures 9 and 10, respectively. The addition of CuO and ZnO nanoparticles to the GO nanoparticles reduced the mixture performance even at a low CuO and ZnO mass fraction, but it remained greater than that of DI water. Figure 8 shows the mass loss over a specific time duration in the CuO, ZnO, and GO water-based nanofluids at 0.0004 wt % concentration under natural sun over the period of 1800 s. Photothermal Efficiency Photothermal efficiency (PTE) is the ratio between the internal energy increase of the fluid and the total incident solar radiation [46], where C_W defines the specific heat capacity of water measured in J/(kg K); L_v is the latent heat of vaporization (J/kg) at a pressure of 1 bar; m_loss is the mass loss rate (m_loss = m_initial/t) of the nanofluid in time t, measured in kg/s; I is the solar irradiance, which is equal to 850 W/m2, as measured by the pyranometer; m_w is the mass of water in kg; A is the illumination area of the nanofluid sample, measured in m2; and ΔT is the average change in temperature over time t across the three thermocouples. Equation (1) gives this energy analysis, expressed in the form of the photothermal efficiency. Figures 11 and 12 show the photothermal efficiency of the water-based nanofluids (GO, ZnO, and CuO) and their composites (GO-ZnO and GO-CuO) at different wt % concentrations (0.0004 wt %, 0.0008 wt %, and 0.0012 wt %).
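For reference, an efficiency expression consistent with the quantities defined above (sensible heating of the water plus the latent heat carried off by the evaporated mass, divided by the incident solar energy over the illuminated area) can be written as follows; this is a plausible reconstruction, and the published Equation (1) may differ in detail.

```latex
% Plausible reconstruction of Equation (1); the published form may differ.
\mathrm{PTE} \;=\; \frac{m_{w}\, C_{W}\, \Delta T \;+\; m_{loss}\, L_{v}\, t}{I \, A \, t}
```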
It can be observed that the photothermal efficiency of all types of nanofluids is directly related to the nanoparticle concentration, and it is enhanced as the nanoparticle concentration is increased. Pure GO nanofluid has the highest efficiency relative to the other nanofluids used in this research because of its high absorptivity and high thermal conductivity compared with the others. The photothermal efficiency is expected to be higher for nanoparticles of higher thermal conductivity: according to the Fourier law of heat conduction, the heat transfer rate depends on the thermal conductivity, which is directly related to the temperature, and at a higher thermal conductivity a higher temperature will be achieved, which is the main parameter in calculating the photothermal efficiency. Specific Absorption Rate (SAR) The specific absorption rate is conventionally defined as the energy absorbed per unit mass of nanoparticles. It is an important factor that is also used to calculate the photothermal efficiency of different types of nanofluids. It can be calculated in kW/g using Equation (2) [46]. In Equation (2), C_w and m_w symbolise the specific heat capacity (J/(kg K)) and mass (kg) of the base fluid (water), respectively; L_v is the latent heat of vaporization (J/kg) at a pressure of 1 bar; and ΔT_n and ΔT_w show the change in temperature of the nanofluid and of the base fluid (water) in time Δt, respectively. Figures 13 and 14 show the specific absorption rate (SAR) of the binary nanofluids (ZnO-GO and CuO-GO) and the individual nanofluids (ZnO, CuO, and GO). Comparing all the nanofluids used, pure GO nanofluid has the maximum specific absorption rate (SAR) at the different wt % concentrations. Conclusions The collection of solar energy by direct-absorption nanofluids is an encouraging technique for solar power generation systems. Many studies on different types of nanoparticles for solar energy state that a comparison of the photothermal performance characteristics of various nanofluids under the same experimental conditions is much needed.
In direct absorption solar collectors (DASCs), the photothermal conversion efficiency of three nanomaterials (GO, ZnO, and CuO) and their composites (GO-CuO and GO-ZnO) was experimentally examined under natural sun. The contributions of optical absorption, changes in fluid volume temperature, and reduction in the mass of the nanofluids are discussed from the perspective of their photothermal conversion efficiencies. The experimental outcomes show that all nanofluids and their composites have higher solar energy absorption, higher temperature gain, and higher mass loss than the base fluid (DI water), and the GO nanofluid proved to be the best because of its strongly solar-absorptive nature. A 46.61% enhancement in the photothermal conversion efficiency of the GO nanofluid was accomplished within the experimental domain at 0.0012 wt % concentration. It was quantitatively observed during the experimentation that the addition of a small amount of nanoparticles to the base fluid can significantly improve its photothermal performance. Compared with the base fluid, the ranking of the nanofluids with respect to photothermal performance in this experimentation is GO, GO-ZnO, GO-CuO, ZnO, and CuO.
Sinistrals are rarely “right”: evidence from tool-affordance processing in visual half-field paradigms Although current neuroscience and behavioral studies provide substantial understanding of tool representations (e.g., the processing of tool-related affordances) in the human brain, most of this knowledge is limited to right-handed individuals with typical organization of cognitive and manual skills. Therefore, any insights from these lines of research may be of little value in rehabilitation of patients with atypical laterality of praxis and/or hand dominance. To fill this gap, we tested perceptual processing of man-made objects in 18 healthy left-handers who were likely to show greater incidence of right-sided or bilateral (atypical) lateralization of functions. In the two experiments reported here, participants performed a tool vs. non-tool categorization task. In Experiment 1, target and distracter objects were presented for 200 ms in the left (LVF) or right (RVF) visual field, followed by 200 ms masks. In Experiment 2, the centrally presented targets were preceded by masked primes of 35 ms duration, again presented in the LVF or RVF. Based on results from both studies, i.e., response times (RTs) to correctly discriminated stimuli irrespective of their category, participants were divided into two groups showing privileged processing in either left (N = 9) or right (N = 9) visual field. In Experiment 1, only individuals with RVF advantage showed significantly faster categorization of tools in their dominant visual field, whereas those with LVF advantage revealed merely a trend toward such an effect. In Experiment 2, when targets were preceded by identical primes, the “atypical” group showed significantly facilitated categorization of non-tools, whereas the “typical” group demonstrated a trend toward faster categorization of tools. These results indicate that in subjects with atypically organized cognitive skills, tool-related processes are not just mirror reversed. Thus, our outcomes call for particular caution in neurorehabilitation directed at left-handed individuals. Introduction In typical right-handed individuals, the processing of information about tools takes place primarily in their left hemispheres (for reviews, see Johnson-Frey, 2004;Lewis, 2006; see also Orban and Caruana, 2014;Vingerhoets, 2014). Interestingly, in the case of tool-related manual skills, the engagement of left-lateralized processes is apparent even when an interaction with a tool is performed with the non-dominant (left) hand (e.g., Johnson-Frey et al., 2005;Króliczak and Frey, 2009). Whether or not the neural underpinning of tool-use skills in left-handed individuals (sinistrals) exhibits the same asymmetry is currently debated (Vingerhoets et al., 2012;Goldenberg, 2013). Surprisingly, this discussion takes place in the absence of systematic research on representations underlying perceptual processing of tools and other man-made objects in this often-discarded (or rather underrepresented in scientific research) population (for a review on this and other topics, see Willems et al., 2014). 
Although both neuropsychological (Goldenberg, 2013) and neuroimaging (Vingerhoets et al., 2012) data from sinistrals, as compared to dextrals (right-handers), point to a less asymmetric organization of functions, it is yet to be determined whether such an effect is due to a tendency for all left-handers to have their brains more symmetrically organized or due to a higher incidence of atypical representation of functions introducing bias in the group data from this population (for a discussion, see Króliczak, 2013a). Indeed, this is quite likely given the evidence showing that up to 30% of left-handed individuals demonstrate atypical, i.e., bilateral or right-sided, organization of cognitive skills such as language (Knecht et al., 2000), praxis, or both (Króliczak et al., 2011; Vingerhoets et al., 2013; see also Meador et al., 1999). If such a pattern were a reflection of a more general organization of functions in their brains, one would predict that left-handers with atypically organized higher-order manual skills would also exhibit atypical laterality of the processing underlying the categorization of tools (cf. Ochipa et al., 1989). Testing for this possibility is paramount because, in the long run, it has a clear potential to reveal handedness-independent interrelations of cognitive functions in the brain, whether typical or not. The easiest and arguably most effective way of addressing this issue is the use of a visual half-field (VHF) paradigm, which is a reliable measure of hemispheric dominance of functions when used properly (Hunter and Brysbaert, 2008; Verma and Brysbaert, 2011; see also: Garcea et al., 2012; Helon and Króliczak, 2014). In the majority of studies related to tool processing, however, the issue of typical and atypical representation of this cognitive skill has never been directly addressed (cf. Verma et al., 2013). Notably, one of the first reports to investigate the laterality of tool representations with the use of a VHF paradigm was a paper by Verma and Brysbaert (2011), who tested their right-handed participants on a categorization task with bilaterally presented man-made objects (tools and non-tools). Yet, the sample they used did not allow them to pose the question of typical vs. atypical processing of the tool category. Therefore, in line with previous studies that drew their conclusions only from right-handers (for a review, see: Lewis, 2006), when averaging across tests and participants, the only effect they observed was some right visual-field (RVF) advantage for the categorization of man-made objects, including tools. A somewhat stronger effect was observed in a study that utilized a different VHF test, i.e., a lateralized masked priming paradigm, by Garcea et al. (2012), in which participants categorized centrally shown pictures of tools or animals preceded by laterally presented identical or scrambled primes. The priming effect they observed only for tools again indicated an RVF advantage for tool categorization. Given that the majority of subjects involved were right-handed, the chance of finding a subset of individuals with atypically represented tool-processing skills was neither high nor addressed. In this study, we investigated the processing of tool-related information exclusively in left-handers, a population offering a higher incidence of individuals with atypically lateralized functions (e.g., Króliczak et al., 2011). We wanted to ensure that the to-be-obtained results would specifically concern tools as a unique type of human artifacts.
Therefore, the tool category - for which the object concept is linked not only to the relevant functional properties of that object type but also to a set of invariant, use-related properties or stable affordances (e.g., the type of grip required when manipulating the tool in accordance with its function, Borghi and Riggio, 2009; see also Tucker and Ellis, 2004; Bub et al., 2015; cf. the micro-affordance concept by Ellis and Tucker, 2000) that trigger the relevant representations of manual skills (e.g., Vainio et al., 2008; Bub et al., 2013) - was contrasted with other man-made objects (i.e., non-tools), a wider category of human artifacts for which manipulability is no longer that important but some function is still present. Specifically, we tested: (1) whether or not a difference in visual processing of tools vs. other man-made objects would be observed in accuracy and response times (RTs) in two disparate paradigms utilizing VHF presentations, (2) whether or not the potential left-right asymmetry demonstrated in such experiments would be homogenous across left-handers or, conversely, would allow us to divide the group into two different samples showing an advantage for one or the other visual field, and (3) whether or not this pattern of performance would be consistent within a group across the selected behavioral tasks. We hypothesized that a VHF advantage would be present only for the processing of pictures of tools. Specifically, we expected that our left-handed participants would split into two groups, one showing a left visual field (LVF) advantage for tool processing, and the other demonstrating the typical, RVF advantage. Finally, we predicted that the processing of non-tools would be unaffected by the side of presentation (Experiment 1), or the side in which the prime appeared (Experiment 2), irrespective of the group. Experiments Although the order of the two experiments described here - one with laterally presented targets (in either VHF), and one with laterally presented primes (in either VHF) - was counterbalanced across participants, for simplicity we will nevertheless refer to the presentation of target objects in VHFs as Experiment 1, and to the presentation of primes in VHFs as Experiment 2. Both experiments were run in the Action and Cognition Laboratory in the Institute of Psychology at Adam Mickiewicz University in Poznań, Poland. The study was approved by the local Ethics Committee for Research Involving Human Subjects and was carried out in accordance with the principles of the 1964 Helsinki Declaration. Eighteen healthy left-handed volunteers (undergraduate or postgraduate students, 9 women, mean age = 23.3, SD = 3.7) took part in Experiment 1 and Experiment 2, and both experiments were undertaken with the understanding and written consent of each participant. All subjects had normal or corrected-to-normal visual acuity and, as established by the revised version of the Edinburgh Handedness Inventory (Oldfield, 1971; Dragovic, 2004), were strongly left-handed (mean laterality quotient = −83.9, SD = 22.1). Before conducting any analyses we examined whether or not there were any atypical cases among our participants, based on their responses to all stimuli presented to the left or right visual field.
Consequently, two laterality indices (LI1 for Experiment 1 and LI2 for Experiment 2) were calculated for each individual in the following way: LI1 = [(L1 − R1)/(L1 + R1)] × 100, where L1 and R1 represent RTs for targets (tools and non-tools) presented in the left (L1) or right (R1) VHF, respectively, and LI2 = [(L2 − R2)/(L2 + R2)] × 100, where L2 and R2 represent RTs for targets (tools and non-tools) preceded by identity primes presented, again, in the left (L2) or right (R2) VHF. Each individual's LI1 and LI2 were then averaged to form a measure of general visual field dominance, LIG [LIG = (LI1 + LI2)/2]. Participants with LIG < 0 were classified as representing the left visual-field advantage group (LVF-A, N = 9, 5 women), whereas those with LIG > 0 were classified as representing the right visual-field advantage group (RVF-A, N = 9, 4 women). Despite the different directions of the visual field asymmetries, the groups did not differ from each other in terms of the actual strength of these asymmetries [t(16) = 0.29, p = 0.76] as measured in absolute values. Experiment 1: Categorization of Target Objects Presented in LVF or RVF Methods The design of Experiment 1 was based on that used by Verma and Brysbaert (2011) with some modifications. Stimuli The stimuli consisted of 60 line-drawings of familiar man-made objects (30 tools, 30 non-tools; the list of all pictures can be found in Appendix 1) from the set of 400 pictures used by Cycowicz et al. (1997). They were downloaded from the website of the Cognitive Electrophysiology Laboratory (CEPL) at the New York State Psychiatric Institute and Columbia University Medical Center (http://nyspi.org/cepl/resources) with the consent of one of the authors. Half of the objects from each category (15 tools and 15 non-tools) were rotated so that the long axis of the object was deflected from the vertical by 45°, whereas objects from the other half were rotated in the same manner to obtain a deflection of 315°. All images were sized to 140 × 140 pixels. Procedure Before the experiment proper, participants were familiarized with all the stimuli. Images of tools and non-tools were presented in the middle of the screen on a white background. The name of the object was displayed below the picture, and the name of the category above it. Each slide was presented for 3000 ms to ensure proper familiarization with the category of the objects to be shown in the experimental task. Subsequently, a training session of 24 trials was administered, and it involved an equal number of randomly selected pictures from both categories. Participants were seated in front of the screen at a viewing distance of ∼57 cm. Each trial began with a central fixation cross (sized 1° of visual angle) of variable (450, 550, 650, or 750 ms) duration. Next, two images of different objects belonging to the same or different category (tool vs. non-tool) were presented in the left and right visual field (starting at 3° of visual angle from the middle of the screen; both images sized 4° of visual angle), with a central arrow (sized 1° of visual angle) pointing to the left or right. The role of the arrow was to indicate the stimulus to which attention should be paid. After 200 ms, the images were replaced with black-and-white high-contrast pattern masks for another 200 ms. Similarly to the study by Verma and Brysbaert (2011), the task was to decide (as quickly and accurately as possible) whether the target object was a tool or non-tool.
The arrow remained on the screen until the participant responded, but for no longer than 2600 ms after the disappearance of the masks. Participants were asked to respond bimanually, with their index fingers when the target was a tool and with their middle fingers when the target was a non-tool. The reaction time, as measured by the first key press, and the accuracy of this response were recorded by the software used for stimulus presentation. A 1000-ms blank screen was introduced between successive trials. The trial structure is depicted in Figure 1. The design was implemented in SuperLab ver. 4.5.2 (Cedrus®, San Pedro, CA). The stimuli were presented on a 20-inch CRT monitor with a refresh rate of 85 Hz and a resolution of 1280 × 960. An "RB-730" response pad by Cedrus was used for measuring accuracy and RTs. Every participant completed two blocks of 240 randomly presented trials with a 2-min break between the blocks. Each of the 60 stimuli was presented four times in each block: twice in the LVF (with compatible or incompatible distracters) and twice in the RVF (again with compatible or incompatible distracters). Care was taken to ensure that the two images presented in every trial were randomly paired for each participant and depicted different objects. All the collected data were analyzed with four separate repeated-measures Analyses of Variance (ANOVAs), two for RTs to correctly categorized objects and two for accuracy. In the within-subjects analyses, the factors were target location (LVF, RVF), target category (tool, non-tool), and distracter compatibility (compatible, incompatible). In the mixed analyses, we included an additional, between-subjects factor, i.e., group (LVF-A, RVF-A), in order to account for the fact that each half of our participants demonstrated the opposite overall visual field advantage. The adopted level of significance was alpha = 0.05. The required follow-up tests of simple main effects were Bonferroni corrected (marked Bf-p). For reaction times accompanying a correct categorization of objects, outliers greater than two standard deviations above or below the mean (calculated for each participant in each condition, 4.9% of all trials) were removed. Statistical analyses were carried out using SPSS 21.0 (SPSS Inc., Chicago, IL). Recognition accuracy No significant difference in average accuracy was found between the LVF-A and RVF-A groups [t(16) = 0.72, p = 0.48]. In addition to a trend toward a significant main effect of target category [F(1, 16) = 4.27, p = 0.06, ηp² = 0.21] and a trend toward a significant interaction between target category and distracter compatibility [F(1, 17) = 3.95, p = 0.06, ηp² = 0.20], both reported above, there was now also a significant interaction between group, target location, and distracter compatibility [F(1, 16) = 5.84, p < 0.05, ηp² = 0.27], but none of the post-hoc tests survived the Bonferroni correction. Response times (RTs) to correctly categorized objects Again, there was no significant difference in the mean RTs between the LVF-A and RVF-A groups [t(16) = 0.28, p = 0.78]. Importantly, we found a significant interaction between group, target location, and target category [F(1, 16) = 6.18, p < 0.05, ηp² = 0.28]. Namely, participants in the RVF-A group showed significantly faster categorization of tools presented in the RVF as compared to the LVF (mean RT in the RVF = 825 ms, SE = 72 ms vs. LVF = 882 ms, SE = 71 ms; Bf-p < 0.01).
In contrast, participants in the LVF-A group showed a different pattern: although their responses were faster to tools correctly categorized in the LVF as compared to the RVF (mean RT for the LVF = 835 ms, SE = 71 ms vs. RVF = 866 ms, SE = 72 ms), this tendency did not reach the significance threshold after the Bonferroni correction (Bf-p = 0.13). Neither group showed any significant VHF dominance for non-tool categorization (LVF-A group: Bf-p = 1.00; RVF-A group: Bf-p = 1.00). These effects are shown in Figure 2. Discussion of Experiment 1 The paradigm used in Experiment 1 provides a unique approach to investigating the laterality of mechanisms involved in the categorization (or even recognition) of stimuli of different kinds. Namely, a required cognitive decision is made on the basis of a target stimulus presented laterally, i.e., appearing exclusively in one of the two VHFs (though accompanied by a non-target on the opposite side), and thus projected only to the contralateral hemisphere. Therefore, any preferential stimulus processing observed in the left or right VHF indicates that the most relevant mechanisms, e.g., here for the extraction of tool-specific affordances, are predominantly lateralized to the right or left hemisphere, respectively. In light of the above assumptions, the lack of a main effect of target location (or an interaction between target location and target category) observed both in accuracy and in RTs to correctly categorized stimuli in the within-subjects analyses could be regarded as quite surprising. This is no longer the case, however, when one realizes that the left-handed participants we studied clearly represented two disparate groups, each demonstrating a visual field advantage on opposite sides. After taking this distinctive attribute into account, i.e., by introducing into our analyses the group factor - which, notably, was independent of the task (or experiment) and stimulus type - we found different patterns of RTs to correctly categorized stimuli. The right visual field advantage for the categorization of tools observed for RTs in the "typical" (RVF-A) group is consistent with a well-established role of the left hemisphere in the encoding and retrieval of visual representations of tools (e.g., Grafton et al., 1997; Perani et al., 1999; Verma and Brysbaert, 2011; Garcea et al., 2012; or tool-use skills, e.g., Helon and Króliczak, 2014; cf. Króliczak, 2013b). A trend toward an LVF advantage observed in the "atypical" (LVF-A) group for tool categorization reveals another important finding, namely that the strength of the involvement of right-hemisphere mechanisms in the processing of human artifacts - and particularly tools - varies substantially across this group of individuals. Indeed, among the subjects with the putative atypical organization of object processing (and perhaps other cognitive skills) there were two participants who, despite showing a clear general LVF advantage (irrespective of the task and stimulus kind), did not reveal such an effect for tools. Therefore, it should be emphasized at this point that such a result is not an artifact of the grouping method adopted in our study. A very similar pattern of outcomes has been reported by Verma et al. (2013) in a VHF study on symmetry detection, wherein participants with known atypical hemispheric dominance for speech demonstrated greater variability in the studied task, with only about half of them showing an LVF advantage for the processing of symmetrical shapes.
The faster categorization of tools observed in the RVF-A group in the dominant VHF, and a similar (though much weaker) effect observed in the LVF-A group, as opposed to no comparable effect of any kind for non-tool stimuli, is also consistent with the idea that information about tools, in contrast to other objects (e.g., animals, houses, or graspable shapes with no function), is processed in the brain in a unique way (e.g., Chao et al., 1999; Chao and Martin, 2000; Creem-Regehr and Lee, 2005). In fact, nearly all studies on tasks involving tools in typical (usually right-handed) individuals point to the left hemisphere as the seat of their representations (including their concepts and the relevant manual skills). It is also worth mentioning that our finding of no visual field asymmetry in the accuracy or speed of categorization for non-tools is in line with numerous neuroimaging and behavioral studies (e.g., Biederman and Cooper, 1991; Proverbio et al., 2011; Verma and Brysbaert, 2015). These reports clearly indicate that the representations (or perhaps the mechanisms involved in categorization and/or recognition) of non-manipulable objects are organized more bilaterally. That is, neither hemisphere seems to be preferentially involved in their encoding and retrieval. Notably, the lack of preferential involvement of either hemisphere for non-tools did not prevent our participants from being more accurate in their processing (there was at least a clear trend toward greater accuracy in the categorization of non-tools as compared to tools, irrespective of the distracter's category). Although this finding may just indicate that our sample was simply more familiar with the non-tool objects included in this study (and the presence of compatible distracters seemed to facilitate their categorization even more), this result goes against the hypothesis that greater expertise with a given category of objects is accompanied by more localized and/or lateralized processing (such an argument seems to be tacitly assumed in many studies on tool representations). But is it really a specific mechanism, rather than a more general processing stream, that was tackled with the VHF paradigm we adopted in Experiment 1? Alternatively, can any results obtained with such an approach really tell us anything about the inner organization of the processes that are involved in the task of interest? In our opinion, some light can be shed on this issue by using laterally presented objects as primes for centrally displayed targets requiring subsequent categorization. This is exactly what has been done in Experiment 2. Experiment 2: Categorization of Objects Preceded by Primes Presented in either LVF or RVF Methods The design of Experiment 2 was based on that used by Garcea et al. (2012) with some modifications. Stimuli The stimuli consisted of 60 gray-scaled pictures of familiar man-made objects (30 tools, 30 non-tools; the list of all pictures can be found in Appendix 1). As in Experiment 1, half of the objects from each category (15 tools and 15 non-tools) were rotated so that the long axis of the object was deflected from the vertical by 45°, whereas objects from the other half were rotated in the same manner to obtain a deflection of 315°. Seventy percent additive noise was overlaid on all the pictures (for the rationale of this manipulation, see Garcea et al., 2012). All images were sized to 174 × 174 pixels.
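The exact recipe behind the 70% additive noise is not spelled out here, so the sketch below shows one plausible reading: a convex blend of the grayscale image with uniform noise in which the noise receives a 0.7 weight. The function name and the weighting scheme are our assumptions; the implementation used by Garcea et al. (2012) and in this study may differ.

```python
import numpy as np

def add_uniform_noise(gray_image, noise_weight=0.7, rng=None):
    # Blend a grayscale image (values in [0, 1]) with uniform random noise.
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.uniform(0.0, 1.0, size=gray_image.shape)
    blended = (1.0 - noise_weight) * gray_image + noise_weight * noise
    return np.clip(blended, 0.0, 1.0)
```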
Procedure At the beginning of the experiment, participants were familiarized with the stimuli in the same manner as in Experiment 1. They also took part in a training session of 24 trials, which again involved an equal number of randomly selected pictures from both categories. Each trial began with a central fixation cross (sized 1° of visual angle) of variable (450, 550, 650, or 750 ms) duration. Next, a prime (a tool or a non-tool) was presented in the left or right visual field (starting 3° of visual angle from the middle of the screen; 5° of visual angle in size). In the identity condition, the prime was the same as (i.e., identical to) the to-be-seen target, while in the scrambled condition it was a scrambled version of the to-be-seen target. In both conditions, the prime was accompanied in the opposite visual field by a scrambled version of a different image from the same category (again, starting 3° of visual angle from the middle of the screen; 5° of visual angle in size). After 35 ms, the prime and the accompanying image were replaced with black-and-white high-contrast pattern masks of the same size for 118 ms. Then, the target image (a tool or a non-tool) was presented centrally and remained on the screen until the participant made a response, but for no longer than 3000 ms. The task was to decide (as quickly and accurately as possible) whether the target was a tool or a non-tool. As in Experiment 1, participants responded bimanually, with their index fingers if the target was a tool and with their middle fingers if the target was a non-tool. The time of the first key press and the correctness of the response were recorded. A 1000-ms blank screen was introduced between successive trials. The trial structure is depicted in Figure 3. The technical equipment and software used were identical to those in Experiment 1. Every participant completed two blocks of 240 randomly presented trials, with a ∼2-min break between the blocks. Each of the 60 stimuli was presented four times in each block: twice in the LVF and twice in the RVF, twice in the identity condition and twice in the scrambled condition. Care was taken to ensure that the prime and the accompanying image presented in every trial depicted different objects of the same category (a tool or non-tool), randomly paired for each participant. As in Experiment 1, the collected data were analyzed with four separate repeated-measures ANOVAs, two for RTs to correctly categorized objects and two for accuracy. The within-subject factors were prime location (left, right), target category (tool, non-tool), and prime condition (identical, scrambled). The between-subjects factor was group (LVF-A, RVF-A). The adopted level of significance was alpha = 0.05 and, if necessary, post-hoc tests were Bonferroni corrected (Bf-p). FIGURE 3 | Trial structure and timing in Experiment 2. After a fixation point presented on a blank screen for a variable time interval (450, 550, 650, or 750 ms), the priming stimulus (identical or scrambled version of the target) was shown either on the left or right for 35 ms, with an accompanying scrambled image presented on the opposite side. Both stimuli were then covered by 118-ms masks. Next, the target image was presented centrally and stayed on the screen until a participant responded or for up to 3000 ms. A 1000-ms blank screen separated successive trials.
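To make the design above easier to follow, here is a minimal, hypothetical sketch of how the 240-trial blocks of Experiment 2 could be constructed; stimulus labels, the function name, and the seed handling are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the Experiment 2 trial list: each of
# the 60 stimuli appears four times per block -- LVF/identical, LVF/scrambled,
# RVF/identical, RVF/scrambled -- with a variable fixation duration per trial.
import random
from itertools import product

STIMULI = [f"tool_{i:02d}" for i in range(1, 31)] + [f"nontool_{i:02d}" for i in range(1, 31)]
FIXATION_MS = (450, 550, 650, 750)

def build_block(stimuli=STIMULI, seed=None):
    rng = random.Random(seed)
    trials = []
    for target, side, prime in product(stimuli, ("LVF", "RVF"), ("identical", "scrambled")):
        trials.append({
            "target": target,
            "category": "tool" if target.startswith("tool") else "non-tool",
            "prime_side": side,
            "prime_condition": prime,
            "fixation_ms": rng.choice(FIXATION_MS),
            "prime_ms": 35, "mask_ms": 118, "max_response_ms": 3000,
        })
    rng.shuffle(trials)
    return trials

block = build_block(seed=1)   # 60 stimuli x 2 sides x 2 conditions = 240 trials
```

Crossing every stimulus with the two presentation sides and the two prime conditions reproduces the counterbalancing described above.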
For RTs to correctly categorized objects, outliers more than two standard deviations above or below the mean were removed (4.8% of all trials). Response times (RTs) to correctly categorized objects The LVF-A and RVF-A groups did not differ significantly in mean RTs [t(16) = 1.09, p = 0.29]. As above, we found a significant main effect of prime condition [F(1, 16) = 14.71, p = 0.001, ηp² = 0.48], such that targets were categorized faster when preceded by identical primes as compared to scrambled primes, and a new significant interaction between prime location and prime condition [F(1, 16) = 4.35, p = 0.05, ηp² = 0.21], such that only in the case of left-sided priming did identical primes lead to faster categorization of the subsequent targets compared to scrambled primes (mean RT for identical primes = 606 ms, SE = 25 ms vs. scrambled primes = 630 ms, SE = 27 ms; Bf-p < 0.01). However, both these effects should be interpreted with caution, because there was also a significant interaction between group, prime location, and prime condition [F(1, 16) = 6.26, p < 0.05, ηp² = 0.28], which clarified their nature: only participants in the LVF-A group responded significantly faster when primes presented in their dominant VHF were identical rather than scrambled (mean RT for identical primes = 629 ms, SE = 35 ms vs. scrambled primes = 659 ms, SE = 38 ms; Bf-p < 0.01). In the RVF-A group, this effect missed the significance threshold (mean RT for identical primes = 578 ms, SE = 40 ms vs. scrambled primes = 598 ms, SE = 37 ms; Bf-p = 0.07). Nevertheless, the impact of right-sided identical priming in the RVF-A group was revealed by a planned a priori t-test [t(8) = 2.47, p < 0.05]. These effects are shown in Figure 4. Finally, the most important significant interaction was revealed between group, target category, and prime condition [F(1, 16) = 4.47, p = 0.05, ηp² = 0.22]. Participants classified as the LVF-A group responded faster when non-tools were preceded by identical compared to scrambled primes (mean RT for identical primes = 635 ms, SE = 38 ms vs. scrambled primes = 657 ms, SE = 36 ms; Bf-p < 0.01). The RVF-A group, on the other hand, showed a clear trend toward faster categorization of tools when they were preceded by identical compared to scrambled primes (mean RT for identical primes = 562 ms, SE = 40 ms vs. scrambled primes = 587 ms, SE = 41 ms; Bf-p = 0.06). Indeed, this effect was significant as shown by a planned a priori t-test [t(8) = 2.80, p < 0.05]. These results are shown in Figure 5. The mean RTs and average accuracy for all the conditions are listed in Table 2. Discussion of Experiment 2 Because the task in Experiment 2 involved a centrally presented target stimulus (encoded by both hemispheres), whose processing could potentially be affected by the laterally presented primes, the results obtained with this paradigm may tell us substantially less about the laterality of the neural mechanisms involved in the visual categorization of man-made objects, but potentially much more about the inner organization of the processes that subserve this function. Despite the very complex pattern of results obtained in this experiment, two outcomes are clear-cut. Not surprisingly, the typical (RVF-A) group responded faster following identical priming from its dominant right visual field, and the atypical (LVF-A) group responded faster following identical priming from its dominant left visual field.
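As a concrete illustration of the trimming step reported above, the following sketch removes correct-trial RTs lying more than two standard deviations from the mean. The text does not specify whether the mean and SD were computed per participant, per condition, or over all trials, so the per-participant version shown here is only one plausible reading; the column names are assumptions.

```python
# Hedged sketch of the 2-SD RT trimming described above (about 4.8% of trials
# were removed in the study). Expects a DataFrame of correct trials with an
# "rt" column and a "participant" column; grouping choice is an assumption.
import pandas as pd

def trim_rts(df, rt_col="rt", by=("participant",), k=2.0):
    def _trim(g):
        m, s = g[rt_col].mean(), g[rt_col].std()
        return g[(g[rt_col] - m).abs() <= k * s]
    kept = df.groupby(list(by), group_keys=False).apply(_trim)
    removed_pct = 100 * (len(df) - len(kept)) / len(df)
    return kept, removed_pct
```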
Yet, this was only the case when tools and non-tools were collapsed. In sharp contrast, as the most crucial outcome of our study indicates, whereas the effect of greater facilitation of RTs following identical priming in the typical group was driven primarily by faster reaction times to tools, the greater facilitation of RTs following identical priming in the atypical group was driven primarily (indeed, almost exclusively) by faster reaction times to non-tool targets. It must be clearly emphasized, though, that the latter two effects were now independent of the side in which the priming stimulus occurred. The results of Experiment 2 therefore suggest that in the case of cognitive mechanisms that are strongly encapsulated (i.e., form a relatively independent module specialized for certain kind of stimulus encoding, Króliczak et al., 2012;cf. Clark, 2009), the presence of the prime in the information-processing stream will affect the subsequent (centrally-categorized) target irrespective of priming side and regardless of which hemisphere is involved more in the categorization process itself. This is at least the case in the LVF-A group, where the categorization of non-tools was in fact facilitated (in terms of RTs) by identical primes irrespective of their location. Such a pattern of performance also implies that at least in individuals with atypically lateralized object encoding, and putatively atypical organization of other cognitive skills, (1) the concepts of man-made objects which do not have close affinity to any specific representations of manual dexterity are still organized more symmetrically (see also Experiment 1, e.g., Ishai et al., 1999Ishai et al., , 2000Verma and Brysbaert, 2011), but despite being distributed across the two hemispheres, and somewhat counterintuitively, (2) the critical mechanisms for human artifact categorization seem to be more specialized (encapsulated) for non-tool objects than for manipulable tools (whose usage requires a proper grasp and sequence of hand movements). Indeed, the presence of such a specialized mechanism may be responsible for more accurate categorization of non-tools in this particular group. Conversely, in the RVF-A group, a faster categorization of tools preceded by primes on any side implies a more specialized mechanisms contributing to the processing of this narrower category of objects, which is in line with neuropsychological and neuroimaging evidence from right-handed (most of the time having typical organization of cognitive skills) subjects (for a review, see Frey, 2008). This study clearly shows that cognitive decisions involving different categories of objects can be easily primed (e.g., Garofeanu et al., 2004;Garcea et al., 2012). Yet, the strength and direction of the effect will depend on object category-will be different for non-tools and tools-and on the mechanisms predominantly involved in their processing. E.g., the priming effect for non-tools may depend more on the overall target shape, whereas for tools, on its afforded action features, i.e., its graspability. Indeed, we expect that the priming effects would be different not only for disparate object categories, but also for the type of task to be performed, including both perceptual and action decisions (e.g., Helbig et al., 2006;McNair and Harris, 2012;Bub et al., 2013;cf. Craighero et al., 1996;Króliczak et al., 2006). 
General Comments The way the representations of man-made objects are organized and/or lateralized in the human brain, and as a consequence the efficiency with which they are utilized in cognitive processes is likely to depend on-among other factors such as the strength of connections between the object concept and its relevant functional properties, or the distance (the number of levels/nodes/synapses) separating such conceptual and functional knowledge-whether or not a particular type of object affordances is critically linked to the representations of manual skills (e.g., Bub et al., 2008;Pellicano et al., 2010;Proverbio et al., 2011). For example, the chair can be effectively moved closer to the body (or rather legs) with the hands but what it affords has nothing to do with skilled hand movements (thus representing a low degree of manipulability). This is probably why the concepts of tools are special: a reason being the gradually acquired privileged link between the functional knowledge and the knowledge of the relevant movements (i.e., manual dexterity) that comes into play with deft tool use. Such representations and/or links between them are clearly absent in kids who can already name tools but cannot effectively use them, not to mention pantomiming their use (O'Reilly, 1995;Landau et al., 1998). In right-handers, most of the mechanisms underlying tool categorization and/or tool use abilities, as well as the processes that enable orchestrated interactions of the disparate and often differently localized mechanisms involved in dealing with this subcategory of human artifacts, are lateralized to the left hemisphere. Of course, things may change substantially when a preference for using the left hand gets factored in the build-up of their representations. Hence, in some left-handed individuals tool concepts seem to have greater affinity to the right-hemisphere mechanisms underlying hand dominance, although, as our Experiment 1 shows, in the majority of sinistrals this is not the case (cf. Króliczak et al., 2011;Vingerhoets et al., 2013). If the former happens, though, this does not necessarily entail that the representations of other man-made objects are automatically reorganized, shifted and/or moved to the opposite (left) hemisphere (as clearly demonstrated by Experiment 2). Indeed, the mechanisms invoked during interactions with non-tools may in such cases depend further on the more distributed, bilateral processing, being at the same time less prone to local one-sided injuries. If we assume that tool concepts form only a unique subset of the category of man-made objects including non-tools, or there is a substantial overlap between the two categories, then a very counterintuitive idea emerges. Indeed, this idea is of paramount importance for the neurocognitive rehabilitation of apraxia (cf. Oliveira and Brito, 2014). Namely, this study suggests that in patients with atypically organized skills the most effective way of alleviating tool-related conceptual and/or motor deficits that would follow right-hemisphere damages might be targeting first their relatively preserved skills to deal with non-tools. After all, as we demonstrated, some of the processes involved in the categorization of non-tools (see Experiment 2) are in such individuals organized quite similarly to the mechanisms invoked directly during the categorization of tools (see Experiment 1). 
This study as a whole convincingly shows that individuals with atypically organized cognitive skills are not just mirror reversed images of typical subjects (cf. Lewis et al., 2006). This is particularly true about the way the representations of tools are encoded and retrieved in the atypical brain. Notably, although an objective method was used here to divide participants into groups (which happened to be equal) with typical and atypical laterality of object categorization, this should not be construed as evidence that 50% of our left-handers demonstrated atypical laterality of tool processing. Depending on how this issue is approached, e.g., based exclusively on Experiment 1 or Experiment 2, only 38.9% or just 33.3% of sinistrals, respectively, demonstrated the atypical left-visual field (right hemisphere) advantage for tool categorization (consistent with Króliczak et al., 2011). Based on both experiments, there is evidence to indicate that the atypical group seems to possess more refined representations of non-tool objects, despite the involvement of both hemispheres in the processing of such human artifacts. In contrast, individuals with typically organized brains possess more fine-grained representations of tools whereas the non-tool category seems more diffused. Indeed, in our opinion, equivocal effects that were likely obtained while testing left-handers are to blame for the exclusion of sinistrals from scientific research and the lack of interesting reports on their cognitive skills (see also Willems et al., 2014). Limitations of the Study It would be of great interest to test whether or not individuals with atypically organized tool processing would also demonstrate atypical (i.e., bilateral or right-sided) organization of language skills. This could have been easily tested using the VHF paradigm as shown by Hunter and Brysbaert (2008). Based on Króliczak et al. (2011), we expect that no more than 25% of these participants would show atypical language laterality. In the context of Experiment 1, it would be desirable to include a third type of distracter, i.e., a neutral one, in order to further investigate the possible facilitation or interference effects. In Experiment 2, on the other hand, the inclusion of incongruent primes (i.e., representing objects from the other category) could shed some new light on the efficiency and perhaps the more detailed organization of mechanisms and processes involved in the categorization of man-made artifacts. Conclusions Although dextrals were not included in this project, the results we obtained clearly suggest that dividing study participants based on hand dominance, not to mention the exclusion of sinistrals, makes no sense. A much more reasonable approach would be to group subjects into those representing typical and atypical laterality of cognitive skills. Such a change in the recruitment, inclusion, and assignment process could in fact lead to new and hopefully more adequate models of the organization of functions in the healthy brain, which in turn could generate new approaches to neurocognitive rehabilitation. By the same token, these results also indicate that collapsing across all left-handed individuals in fMRI analyses might not be the most advisable strategy. Author Contributions This project was conceptualized by BM and GK. Data was collected by BM, and analyzed by BM and GK. The manuscript was written by GK and BM.
Analysis the Role of Media Perspectives on General Communication and Islamic Communication Communication is the process of delivering messages by someone to other people to tell, change attitudes, opinions or behavior either directly orally or indirectly through the media. In this communication requires a reciprocal relationship between the delivery of messages and recipients namely communicators and communicants. (Hasbullah, 2018) Communication can maintain and move lives. He is also a mobilizer and tool to describe the activities of society and civilization, he can turn instinct into inspiration through various processes and systems for asking, ordering and supervising, he can create a place to store ideas together, strengthen feelings of togetherness by exchanging news and changing the mind becomes an act that describes every emotion and need from the effort to preserve the simplest life to the very scientific human endeavor or the attempt for destruction. Communication is a combination of science, organization and power in the form of a common thread that starts from memory to the noblest appreciation in an effort to lead a better life. Even communication and explanation of information can change the strength of a nation to be negative and positive. So that gave birth to a new paradigm. A communication with the dissemination of news, information and delivery to the public and the public often get a variety of different responses and responses if the delivery does not have a foothold of principles and codes of ethics that are accurate and true, even some that cause slander and disaster. Almost a noble family in ruins, if Allah did not send verses that show the falsity of the news. The verse is a warning to Muslims and Muslims not to easily spread false / vile news or information and does not contain the truth. The Word of God Almighty: Meaning: Allah warns you (do not) re-do such things forever, if you are a believer. Allah explains His verses to you. and Allah knows the Wise. Verily those who want to make (the news of) such abominable deeds spread among those Abstract who believe, for those who suffer the doom in the world and the hereafter. and Allah knows, while, you do not Know. . Presumably, these are some of the verses that came down to provide principles and ethics of communication, especially with regard to the spread of news and information in the midst of society. Shaykh Makarim Shirazi explained the painful punishment in the world as the necessity of severe legal sanctions with the laws in the world. So, spreading false news must be considered a criminal act. On this basis the author wishes to examine how ideally the principles of communication in the Koran should be used as a foothold in conveying, whether information, news, or messages especially those concerned with religious matters. Because al-Qur'an must be believed and felt as: Instructions (guidance for humans in general, and those who fear in particular). Furqān (distinguishing between right and vanity, between real and fictional, between absolute and relative) Raĥmat (spreader of love) Syifâ (antidote, especially for restless and restless hearts), Mau'iżah (advice, advice), advice) Zikr li al-'Â amâmîn (warning to all nature) Tibyânan li kulli syai '(details for something) and so forth. All of these give an indication that the Qur'an is a holy book that has many dimensions and offers very broad insight. 
In this research, we will see how the principles or rules of communication found in the Qur'an, both directly and indirectly. This does not mean comparing or even equalizing both, because in essence it must be all humans can socialize the content of the Qur'an as a holy book. However, the sharp concern in this case is how the concepts presented by God and how the reality that occurs and applies in the West. Especially the non-Islamic world which incidentally does not use the basis of his life with the Qur'an. II. Research Method This research is in the form of a pure library (library), in the sense that all data sources come from written materials related to the topics discussed. Because this study deals with al-Qur'an / revelation directly, then the first and main source is al-Qur'an. The fluidity of secondary data sources, namely books, including magazines related to this paper that have been considered scientific and have been published. The theory used is Intrapersonal Communication, namely: Information processing. This intrapersonal theory has four systems, namely: Memory Thinking, Perception Sensation. These four systems describe how people receive information, process it, store it and produce it again. The fourth series of these systems can be concluded that the explanation is: Sensation is the process of capturing stimuli. Perception is the process of giving meaning to sensations so that humans gain new knowledge. Giving is storing information and calling back. Thinking is processing and manipulating information to meet needs or responses. III. Discussion Technological and industrial revolution that resulted in changes in the exchange of human life and the nature of the press itself. There are sharp criticisms directed at per situ itself because the mass media is growing both in quality and quantity. There are professional developments where journalism has amazed human thought and education. It is clear that the target-target of notification and providing news is something realistic in the lives of those who consider the world of journalism to make a significant contribution. In addition to the above, including the principle of communication found in public communication and in the West is freedom. Westerners regard "freedom" as a human right that cannot be contested, even the government must not interfere. The struggle for freedom of expression is a longstanding struggle by groups of people and individuals for their political environment. It can be seen that public communication or that is used in the western world views that communication and information are commodity goods that can be traded. In every communication activity the greatest benefit is obtained by the main communicator who masters information. Everyone is free to express opinions both verbally and writing without any obstacles and consideration to the values held by other parties. Whereas according to the perspective of Islamic communication is, the greatest advantage in delivering information lies with the communicant (the target of information) not the communicator. Submission of information is essentially aimed at realizing the happiness and well-being of the individual or society that is the target of communication. Besides that, freedom of communication must be coupled with a sense of responsibility and be accompanied by values shared by the nation and state society. 
This liberal system developed in the seventeenth and eighteenth centuries as a result of the emergence of the Industrial Revolution and major changes in the thoughts of society in the west at that time, better known as the century of division. According to this theory humans basically have the natural rights to teach the truth and develop potential when given the clemency of opinion. This is not possible if there is government control. This theory developed in the West which adheres to the philosophy of "Laiser Fair Laiser Passer". The basic principle of Liberalism can be seen from the view of this school regarding the nature of man, the relationship between man and man, with society and the State, as well as the nature of truth and knowledge. According to the ideology of liberalism, humans are essentially born as free creatures that are controlled by reason and reason, the happiness and welfare of individuals is the goal of society, nation and state. Thus the most important thing that stands out in this system and principle is freedom of expression, freedom of speech, freedom of conduct, as long as it does not interfere with the interests of the State and nation. The issue of freedom (in this case including the press) should not only be seen as an aspect of freedom of expression, but also as a need to protect institutions which are important functions in protecting the economic, political image and including towards the changes desired by society. Therefore press freedom in the West is a history of suppressing unpopular ideas. Thus the State as the highest authority over political, economic and social truth has more to do than in its own name to maintain its existence. The freedom that is carried in Western countries in disseminating information, ideas and so on depends very much on the needs, both collectively and individually. The benefits achieved from such freedom do not harm any party in general. It seems that this condition is a bias of democracy which is always touted. Freedom of human behavior is not prohibited as long as it is still within the limits of democracy. In the context of increasing freedom in reporting, the delivery of communication, especially press freedom at the international level, especially in the West, has been raised since 1893 through various international conferences held by journalists. But the meeting did not produce anything significant. The meeting was nothing more than just issuing a resolution or pledge to fight for the press. Very few tangible results have been achieved. Whereas what they had been able to complete had been erased again by propaganda during the two world wars that arose later. Ideally, the expected freedom of the press is: Prohibition of government intervention into the press in the form of concrete actions that can disrupt the smooth flow of information, and the principles of limiting press freedom must be applied through the court, that the court is authorized to carry out punishment. Freedom in conveying news and information, freedom of the press should not be detrimental to certain parties, even though freedom for the press is owned but still in its essence, namely truth. In exercising freedom, especially the world of the press very much helps the needs of the people and society including in developing countries. The press really should be able to double the change, accelerate change, become a democratic institution, a reintegration institution and a provider of institutions giving norms and new cultures. 
Fulfilling the function of freedom in the West, in the context of press freedom, will certainly help people cope with the symptoms of social change. Yet the fact remains that this freedom is still controlled or intercepted by the authorities, especially in countries still ruled by a king. The perspective of Islamic communication, based on the Qur'an and the Hadith, appears very tolerant in limiting these principles, as long as they remain within Islamic criteria, which can be detailed as follows: The first of these concerns the verification of reports (tabāyun), a term widely used in receiving and delivering religious news, especially Hadith. Those who lack prudence and accuracy disregard the impact and influence of news delivered without verification; news from the wicked must therefore be clarified (tabāyun). The principle of Islamic communication must be based on al-ihthiyat and al-tsiqah, namely caution and accuracy. Communicators are required to be trustworthy and honest, not only in accessing news but in all respects, and must not be duplicitous or hypocritical; they must also avoid traits that would undermine their self-respect. When dealing with problems, they do not engage in data manipulation. Not only must their data be accurate; their personality is scrutinized even more deeply, and their words and actions must be in line. This is where the difference with the principles adopted in the Western world lies: there, the most important thing is accurate data and news, while personality and moral example are not in the spotlight. The foundation of the Islamic principles can be seen in the stringent requirements set by Islamic scholars for receiving news and information, especially that relating to the teachings of Islam, including the hadith of the Prophet Muhammad SAW: if the bearer of a report was not careful, the report would be rejected even if he brought rational data. The main criterion in the spotlight is the trait attached to one's personality, namely caution, because a clean personality is the main assessment and greatly determines the standing of what is conveyed. Thus, with regard to the world of information, and journalism in particular, it would be ideal if the code of ethics used in selecting hadiths were coupled with the requirements placed on a communicator in accessing news, because this would better guard against deliberate manipulation of the news. An-Nāqid (الناقد). A communicator says that what is right is right and what is wrong is wrong. This is an obligation for every individual. One of the main principles and ethics of communication is to offer constructive corrections and criticisms of things that are not as they should be, both in terms of applicable law and according to the ethics and norms that live in the community. The press should ideally function as the keeper of truth in the midst of its readers. Deviations must not be allowed to stand; through criticism and correction, the press can prevent irregularities from recurring. Allowing distortion in the midst of society is tantamount to letting the community suffer. In the Qur'an, the task of conveying the truth is an order that must be carried out by each individual through his own work or organization. The Qur'an itself, in its form of presentation, also sets out commands and prohibitions, as well as statements and accounts of earlier peoples in various stories.
Not only are the stories good and successful, but there are also stories that fail in carrying out its mission. Both sides of the story are intended so that humans can learn, so that bad events are not repeated and successful events can be repeated and examined after being modeled again by living people in the next age. According to the verses of the Qur'an, a believer is asked to carry out an obligation in the form of work inviting others to do good, telling people to do good and forbidding people to avoid evil. As found in the Qur'an Al -li Imran: 104. Meaning: Let there be among you a group of people who call for virtue, order to the ma'ruf and prevent from evil; they are the lucky ones. Amar ma'ruf nahi munkar's work is done with only one goal, which is to become a successful person or win. If observed from this verse it is true that not every individual believer is required to carry out this command, because of his differences in ability. However, essentially every individual has an obligation to preach in accordance with their abilities. The ones who are told to appear professional are just a part of many believers. Word: Ummah in this verse is interpreted by Mustafaf Al-Maragghi with: A congregation consisting of individuals who have attachments to one another like members of the human body. The Ummah is a special group (professional) who is able to carry out the invitation (al-Da'wah), able to order and be able to prevent. Everyone has desires and works. To realize the improvement of the Ummah, even in the early period of Islamic development, all people participated in carrying out corrections and criticisms of the existence of irregularities and abuse of office. Penguasapun must be open-hearted to be corrected by the people. Once when Umar bin Khattab preached "If you see any deviation in me, correct you" then a goat herder stood up and immediately asked and answered "If we see you have irregularities we will straighten you with our swords" this means the people have authority to improve the situation and be responsible for carrying out the da'wah duties. People who are required to preach are people who have certain requirements. There are at least four conditions that are possessed to become preachers who excel, namely: First: Having knowledge about the Qur'an and Sunnah and the history of the Companions and Khulafa al-Rashidin, Second: Knowing the situation and condition of the audience, abik facet adapt customs, character and morals as well as matters that are sociological, Third: Knowing the language of the people, Fourth: knowing the ins and outs of religious streams and differences of opinion of the people. The actors of communication, especially those who are involved in the world of journalists, belong to the people who are said to have a role in making changes to the good. Because the reporters have the same mission with the da'wah as found in al-Da'wah. In this case according to the opinion of the writer, here there are differences in the principle of public or Western communication, where they consider making corrections, criticisms and changes only limited to attitudes and as the responsibility of personnel and or individuals. Whereas in Islam, it is part of the fundamental religious teachings and collective obligations. Criticizing leaders, officials, both formally and informally is a competent obligation. 
In this case the mass media, for example, when criticizing anyone, must have a mission and goals serving the common interest of the people, not one-sided interests exploited by certain parties. The behavior described above gives rise to the best people, society, or group (khaira ummah) for others, the believers; of course, the khaira al-ummah can only be realized if they are able to carry out their mission of enjoining others to do good and taking precautions against those who commit evil. This is found in the Qur'an in Surah Ali 'Imran: 110. Meaning: You are the best community raised up for mankind: you enjoin what is right, forbid what is wrong, and believe in Allah. Had the People of the Book believed, it would surely have been better for them; among them are believers, but most of them are wicked. In essence, only insofar as the faithful carry out these tasks can they be considered good people; the measure of a person's goodness is whether he is concerned with improving people for the better. In the Qur'an there are indeed many commands to do good in various matters of life and worship, as well as verses prohibiting bad deeds. All of this reflects the Qur'an's concern, as the source of Islamic teachings, with improving the condition of society in order to achieve a better and more perfect quality of life, in this world and the hereafter. The obligation to uphold the truth and to correct society is a call addressed to everyone without exception. The Hadith of the Prophet Muhammad stresses: Meaning: Whoever among you sees an evil, let him change it with his hand; if he cannot, then with his tongue; if he still cannot, then with his heart, and that is the weakest of faith. (HR. Muslim) Changing evil with the tongue in the above hadith can be understood as the authority to express opinions verbally and in writing, so that the wrong does not take place. The expression "fal yughayyirhu" can be understood as changing the situation: keeping small events that are still only symptoms from growing larger, or, when disasters occur, acting so that they do not continue. The above hadith is firm enough to provide legitimacy for someone fighting for the truth. The tongue and the pen, used as media of da'wah, operate through interpersonal or mass communication channels. Forms of change and improvement through communication, according to William L. Rivers, include presenting a complete and accurate picture of the world to the reader; any behavior that conflicts with this goal is suspect. The courage to uphold the truth is the main mission of those who work in the press; its ethical foundation can be seen in the extent to which they are concerned with the fate of the public and dare to reveal the truth in accordance with facts and reality. Constructive criticism and correction, which the communicator and especially the media are expected to realize, can fortify society against a variety of organized and unorganized crime, rather than merely bringing down certain parties. Al-Hikmah (الحكمة). Professional communicators are encouraged to have the character of al-hikmah in carrying out their mission, whether they work as journalists, preachers, teachers, managers, or leaders, even in the smallest matters. In reality, regarding hikmah in receiving, spreading, and conveying information, there are differences between the Western and Islamic views.
In my opinion, the point of difference here is that Westerners understand wisdom in logic and reality. Its size is a match on reality at the time and was accepted by many people. The wisdom displayed in conveying the information referred to here is the timeliness, conditions, place of who is desired, which in essence no one is harmed. Whereas in the perspective of Islam al-ĥikmah can be viewed from several points of view. The meaning can be understood from the language approach, for example, law or philosophy. However, the main discussion in this paper is the wisdom in the Qur'an. The expression of wisdom in the Koran has several purposes that fit the context of the sentence. In addition, the expressions have their respective characters based on these terms. However, in general the character can be returned to the meaning of the person who has wisdom (alhikmah), so that he avoids actions that cause remorse. Sentences of wisdom in the Qur'an are addressed to many Prophets and Apostles. But the wisdom referred to here implies As-Sunnah, knowledge of halal and haram, about secrets that are not known to the layman. With this advantage, a Prophet and Apostle are given the task to call on the path of Allah. As can be understood that the wisdom originated from and sourced from the book of Allah and the sunnah of the Prophet and saw the positive impact on the actions of people who get and be granted wisdom. While those who are not awarded wisdom, those who always make damage and destruction of this people, because of the attitude that causes to his people. Lafaz In interpreting the wisdoms of the mufassirin, they give different interpretations and thoughts. And they explain it as follows: a. As the task of the Apostles O our Lord, send to them an Apostle from their circle, who will recite to them your verses, and teach them the Book (al Qur'ān) and Al-Hikmah (As-Sunnah) and purify them. Verily, You are the Almighty, the Wise. According to Ibn Abbas, what is meant by "al-kitab" is al-Qur'an, while what is meant by "al-hikmah" is Sunnah. And he also said that wisdom is fiqh which contains the provisions of halal and haram and the advice of the Qur'an. Referred to as a judge for wise people, because it prevents from ignorance. Muhammad Abduh said: Wisdom can be interpreted as a book or al-Qur'an, wisdom also means sunnah in general. Wisdom also means the secrets and benefits of something. If the teachings of the Prophet about wisdom are interpreted as the Prophet's teachings about the fiqh of the Islamic religion, then the important meaning of the hymn is the attitude of Islam. The Prophet taught them the Koran, in addition to the secrets of the objectives of sharia with various charities to Muslims. So that it can be used as an example for them, both deeds and words. b. Advice Including the interpretation of scholars about the meaning of al-hikmah is advice. This can be seen from various verses of the Holy Qur'an. For example in the safe letter of Luqman in this letter Allah explained the wisdom in the form of his advice to his child. The wisdom that is found in this Letter of Luqman Allah gave him as secrets that may not be given to others, so that his advice to his children is very actual to be carried out by anyone. The strengths that Allah gives to Luqman are God's secrets with no known reason. His wisdom in conveying religious teachings made him put as a successful parent. The languages he uses are very polite and wise, touching the audience when listening to them. 
The emphasis that is done at the earliest stage is the emphasis on monotheism, then finally the stage of morality and social institutions of human society. In particular, there are many characteristics of people who have wisdom, such as virtue, including: Reasonable, knowledgeable, intelligent, intelligent, having sharp inner vision, understanding, just, having noble character, honest, tawadhu ', not paying attention to other people's disgrace, short talk subdue lust and so forth. Himah is a source of power to initiate alternative problems from mistakes in thinking and doing. Can avoid things that will be misleading, both in terms of aqeedah, sharia, morals, personal issues and the wider community. With the concept of wisdom, everything will facilitate the process of communication between humans, both directly and indirectly. The real wisdom is a gift from God that not everyone gets. Business and theory to get owned by someone is not necessarily he managed to get it, because wisdom is an absolute right of Allah. People who are given wisdom means he has been given a lot of glory and goodness. A reliable communicator supported by wisdom skills will facilitate him to make various approaches that are perfect in the middle of this life and will be more protected from mistakes, gaps as well as defamation. Thus, it can be concluded that to obtain it is actually a matter of closeness to God. This is actually not a requirement for a journalist, journalist or as an orator. In fact, this is never mentioned as a character, even though here there are advantages to success. In the west the most priority is objectivity. If objectivity is achieved, then the level of success is considered to meet the standards. News that is always disseminated to the community in the past is news and information that is expected not to offend and does not harm certain parties transparently. Then whether the content of the news is from people who are close to the teachings of their religion, whether the contents of the news have accurate data and can be accounted for in a legal and moral manner. IV. Conclusion Communication is actually an activity carried out by someone to convey a message to another person, so that the person does what is meant by the person delivering the message. The essence of communication lies in the similarity of intentions or changes in the behavior of objects or targets. For its socialization as a science found several theories. In the Koran found symbols, words that signal communication as a necessity for humans in their lives. In the course of human life, communication has a long history and varied forms and patterns. In the days before the Apostle, it was still done simply, that is by way of direct confrontation, and further information was carried out in the form of correspondence, this was until the time of the Apostles, Friends and Tabi'in. This fact can be seen when the Apostles and the Shabat spread the wings of Islam by writing a letter first to the intended audience. It is recorded by historians that the Apostle in delivering his missionary communication through letters 105 times. And the media used as a means of communication are increasingly sophisticated in accordance with the times and conditions so is Islamic communication. The terminology of communication in al-Qur'an includes, among others: Al-Ittiāşal, al-I'lam, al-Tabshīr, al-Da'wah, al-Bayan, al-Naba, al-Khabar, al-Qaul. 
These terms are understood as the root words of communication, and they establish that communication is indeed present in the Qur'an. The principles of communication found in al-Qur'ān are:
Improvement of Butamben Anesthetic Efficacy by the Development of Deformable Liposomes Bearing the Drug as Cyclodextrin Complex This work was aimed at enhancing butamben (BTB) anesthetic efficacy by the "drug-in cyclodextrin (CD)-in deformable liposomes" strategy. Phase-solubility studies with natural (α-, β-, γ-) and derivative (hydroxypropyl-α- and -β-, sulfobutylether-β-, methyl-β-) CDs evidenced the highest BTB affinity for βCD and its derivatives and indicated methyl-βCD (RAMEB) as the best carrier. Drug-RAMEB complexes were prepared by different techniques and characterized for solid-state and dissolution properties. The best BTB-RAMEB product was chosen for entrapment in the aqueous core of deformable liposomes containing stearylamine, either alone or with sodium cholate, as edge activators. Double-loaded (DL) liposomes, bearing the lipophilic drug (0.5% w/v) in the bilayer and its hydrophilic RAMEB complex (0.5% w/v) in the aqueous core, were compared to single-loaded (SL) liposomes bearing 1% w/v plain drug in the bilayer. All vesicles showed homogeneous dimensions (i.e., below 300 nm), high deformability, and excellent entrapment efficiency. DL liposomes were more effective than SL ones in limiting drug leakage (<5% vs. >10% after 3 months of storage at 4 °C). In vivo experiments in rabbits proved that all liposomal formulations significantly (p < 0.05) increased the intensity and duration of the drug's anesthetic action compared to its hydroalcoholic solution; however, DL liposomes were significantly (p < 0.05) more effective than SL ones in prolonging the BTB anesthetic effect, owing to the presence of the drug-RAMEB complex in the vesicle core, acting as a reservoir. DL liposomes containing both edge activators showed the best performance. Introduction Butamben (BTB) is an ester-type local anesthetic agent utilized in topical, dermal, and mucosal formulations. Poor water solubility and a duration of action that is short with respect to the potential duration of pain are the main drawbacks limiting its use and therapeutic efficacy. All parenteral products containing BTB have recently been removed from the market, as they were considered unsafe or not effective, probably because of the drug's very low water solubility [1]. There was therefore a strong need to develop new, effective topical delivery systems for BTB able to enhance its solubility and to improve and adequately modulate its release, in turn prolonging its anesthetic effect and limiting possible systemic toxic effects. Phase Solubility Studies An excess amount of BTB was added to 10 mL of water containing increasing quantities of CD in sealed vials, electromagnetically stirred (750 rpm) at 25 ± 0.5 °C up to equilibrium (72 h). Aliquots were taken with a filter syringe (0.45 µm pore size) and spectrophotometrically analyzed for drug content at 287.0 nm (UV/VIS 1600, Shimadzu, Tokyo, Japan). The presence of CD did not interfere with the spectrophotometric assay of BTB. Each test was carried out in triplicate (C.V. < 3%). The apparent 1:1 binding constants of the various BTB-CD complexes were calculated from the slope of the straight lines of the phase-solubility diagrams, according to the following equation [39]: K1:1 = slope/[S0 × (1 − slope)], where S0 is the solubility of BTB in the absence of CD and the slope is that of the linear phase-solubility diagram. Preparation of BTB-CD Solid Systems Equimolar BTB-CD solid systems with the selected CD were obtained by different methods (kneading, coevaporation, cogrinding, and freeze-drying; simple physical mixtures were also prepared for comparison, see below). Samples were stored in a desiccator and sieved (75-150 µm granulometric fraction) before use.
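As a worked illustration of the phase-solubility treatment just described, the sketch below fits the linear diagram and derives the apparent 1:1 binding constant. It assumes the standard Higuchi-Connors analysis for AL-type profiles and uses the intrinsic BTB solubility reported later in the paper (0.86 mM); the function name and the data points are illustrative, not taken from the study.

```python
# Hedged sketch of the AL-type phase-solubility analysis: linear fit of total
# drug solubility vs. CD concentration, then K(1:1) = slope / [S0 * (1 - slope)].
import numpy as np

def binding_constant_11(cd_mM, drug_solubility_mM, s0_mM=0.86):
    slope, intercept = np.polyfit(cd_mM, drug_solubility_mM, 1)  # linear AL diagram
    k11_per_M = 1000.0 * slope / (s0_mM * (1.0 - slope))         # mM^-1 converted to M^-1
    ce = slope / (1.0 - slope)                                   # complexation efficiency
    return k11_per_M, ce

# Illustrative (made-up) data: CD concentration (mM) vs. measured BTB solubility (mM)
print(binding_constant_11([0, 5, 10, 15, 20], [0.86, 1.9, 2.9, 4.0, 5.0]))
```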
Dissolution Studies Dissolution profiles of BTB, both alone and from its various binary systems with the selected CD, were determined according to the dispersed amount method, following the experimental conditions used in previous studies in order to obtain comparable results [29]. Experiments were performed at 37 ± 0.5 °C by adding 600 mg of drug or drug-equivalent to 100 mL of water in a 300 mL beaker. The medium was stirred at 100 rpm by a glass three-bladed propeller. At fixed intervals, aliquots were withdrawn and the BTB content was assayed as described above (see Section 2.2). The same volume of fresh medium was added to keep the volume of dissolution medium constant, and a correction was made to account for the cumulative dilution. Each test was repeated three times (coefficient of variation < 5%). Dissolution behavior was characterized through the percentage of BTB dissolved at 10 min, as an index of the dissolution rate, and the dissolution efficiency at 60 min, as an index of the overall process. Dissolution efficiency (DE) was calculated according to Khan [40]. Preparation of Liposomes Liposomes were prepared according to the thin layer evaporation method. The lipid phase components, put in a round-bottomed flask, were dissolved in chloroform, which was then removed by rotary evaporation under reduced pressure at 58 °C. The resulting thin film, after further drying to completely eliminate any residual solvent, was hydrated with the hydrophilic phase (10 mL water), heated for 10 min to 58 ± 1.0 °C, and then vortexed for 2 min; the treatment was repeated, performing 5 cycles in total. The resulting vesicles were then sonicated for 5 min (Sonopuls HD 2070, 300 W power, probe MS 72, Bandelin Electronic GmbH, Berlin, Germany). Drug-loaded liposomes were obtained through the following methods: (a) 1% free drug was dissolved in the lipophilic phase, and water was used as the aqueous phase (single loading, SL); (b) the BTB-CD complex was dissolved in the hydrophilic phase (10 mL water) at 0.5%, and the remaining 0.5% of drug was dissolved as such in the lipophilic phase (double loading, DL). Characterization of Liposomes The mean diameter, polydispersity index (PDI), and zeta potential of freshly prepared vesicles were determined by Photon Correlation Spectroscopy (PCS) using a ZetaSizer Nano ZS90 (Malvern Instruments Ltd., Malvern, UK) set at 25 ± 0.1 °C, after proper dilution with distilled water to prevent multiscattering phenomena. For each dispersion, six independent samples were collected, each of which was analyzed four times. The average size distribution was then determined, referring to the mode, which is the value best approximating the vesicle mean diameter. For vesicle charge determination, each liposomal dispersion, suitably diluted with distilled water, was dropped into the ZetaSizer electrophoretic cell, and the zeta potential was determined by electrophoretic mobility measurement. Each sample was measured six times at 25 ± 0.1 °C. Morphological examination of the liposomal dispersions was performed by transmission electron microscopy (Philips TEM CM 12, Andover, MA, USA) and scanning electron microscopy (FIB-SEM Gaia 3, Tescan s.r.o., Brno, Czech Republic). A drop of sample was stained with 2% w/v phosphotungstic acid solution and placed on copper grids with carbon films for viewing. Entrapment efficiency (EE%) of loaded liposomes was indirectly determined by separation of the free drug from the vesicles by the dialysis method.
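To illustrate the two dissolution descriptors mentioned above, here is a minimal sketch of how the percentage dissolved at 10 min and the dissolution efficiency at 60 min could be computed from a dissolution profile. It assumes the conventional reading of Khan's dissolution efficiency as the area under the dissolution curve divided by the area of the rectangle corresponding to 100% dissolution over the same interval; the data points are illustrative only.

```python
# Sketch of the dissolution descriptors used above (percent dissolved at 10 min,
# and dissolution efficiency DE at 60 min, assumed per Khan's definition [40]).
import numpy as np

def dissolution_metrics(t_min, pct_dissolved, t_de=60.0):
    t = np.asarray(t_min, dtype=float)
    y = np.asarray(pct_dissolved, dtype=float)
    pd10 = np.interp(10.0, t, y)                 # % dissolved at 10 min
    mask = t <= t_de
    auc = np.trapz(y[mask], t[mask])             # area under the dissolution curve
    de60 = 100.0 * auc / (100.0 * t_de)          # DE = AUC / (100% x 60 min)
    return pd10, de60

# Illustrative (made-up) profile: time (min) vs. % BTB dissolved
print(dissolution_metrics([0, 5, 10, 20, 30, 45, 60], [0, 18, 30, 47, 58, 68, 74]))
```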
Following a previously developed procedure [12], a sample (3 mL) of drug-loaded liposomal dispersion was put into a cellulose acetate dialysis bag (Spectra/Por®, MW cut-off 12,000, Spectrum, Mississauga, ON, Canada), which was sealed and immersed into a closed vessel with 150 mL of distilled water at 20 °C, magnetically stirred at 30 rpm. Samples of the receiving medium, withdrawn at time intervals, were replaced with an equal volume of fresh solvent and spectrophotometrically assayed for drug content, as described above. The experiment was stopped when constant drug concentrations were obtained in subsequent withdrawals. The percentage of encapsulation efficiency (EE%) was calculated according to this equation: EE% = [(total drug − free drug)/total drug] × 100, where the free drug is the amount recovered in the receiving medium at the plateau. Three separate experiments were performed, and the results are given as the mean ± SD. Elasticity of the vesicles was assessed by a LiposoFast-Basic membrane extruder (Avestin GmbH, Mannheim, Germany) connected to a 3 atm pressure source. The vesicle size was determined, as the mean of five experiments, before and after 11 extrusions through 100 nm pore size nitrocellulose membranes (Isopore, Millipore, Bedford, MA, USA). Stability Studies Stability of drug-loaded liposomes was checked for 3 months. The vesicle dispersions were kept at 4 ± 1 °C and, at fixed time intervals, were examined for size, polydispersity, and zeta potential. Drug leakage at the end of the storage period was also determined. Gel Preparation Liposomal dispersions were formulated as Carbopol gel and tested for in vitro drug permeation properties and in vivo drug anesthetic effect. For gel base preparation, 0.5 g Carbopol 940 was added to 99.5 mL of bidistilled water under constant stirring for 24 h at room temperature; gelation was then achieved by triethanolamine addition up to neutral pH. The gel was kept in closed containers, at 4 °C, away from light. Drug-loaded gels were obtained by a previously reported technique [12]. Briefly, the Carbopol gel base was carefully mixed (50/50 w/w ratio) with 1% BTB as liposomal suspension or as simple hydroalcoholic solution (used as reference), obtaining in all cases a final drug concentration in the gel of 0.5% (w/w). In Vitro Permeation Studies through Excised Animal Membrane In vitro permeation studies were carried out using vertical Franz diffusion cells (Rofarma, Gaggiano, Italy) and rabbit ear skin (obtained from a local slaughterhouse) as a percutaneous absorption model, according to a previously developed method [29]. Briefly, the skin, after depilation, was excised from the connective tissue, washed with water, gently dried with filter paper, and then preserved at −25 °C. Before use, the skin was thawed, prehydrated for 1 h with pH 7.4 phosphate buffer solution, and then placed in the diffusion cell, with the stratum corneum side facing the donor chamber and the dermal side facing the acceptor medium (14.5 mL of pH 7.4 phosphate buffer solution, thermostated at 37 ± 0.5 °C and kept under magnetic stirring at 50 rpm [29]). The donor compartment contained 0.15 g of liposomal dispersion or 0.3 g of gel. At predetermined intervals, 0.5 mL samples were collected from the acceptor compartment and replaced with equal volumes of fresh acceptor medium. The drug content was spectrophotometrically determined as described above (see Section 2.2), and the concentration was corrected for the cumulative dilution.
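A minimal numerical sketch of the indirect entrapment-efficiency calculation described above is given below; the drug found in the receiving medium at the plateau is treated as the free fraction, and the numbers and names are illustrative assumptions rather than data from the paper.

```python
# Hedged sketch of the indirect EE% calculation: free (dialyzed) drug at plateau
# is subtracted from the total drug contained in the sampled dispersion.
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

# Example with made-up numbers: 30 mg BTB in the 3 mL sample,
# 2.1 mg recovered as free drug in the receiving medium at the plateau.
print(f"EE% = {entrapment_efficiency(30.0, 2.1):.1f}")   # -> EE% = 93.0
```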
The drug permeability coefficients (Kp, cm/h) were determined according to this equation: Kp = J/Cd, where J is the drug flux through the skin (mg/h·cm²), calculated from the linear portion of the plots of cumulative amount of drug permeated per unit area versus time, and Cd is the initial drug concentration in the donor compartment. Results were expressed as means ± standard deviation (n = 6 independent samples). In Vivo Studies The anesthetic effect of BTB formulated in aqueous Carbopol gel, either as a hydroalcoholic solution (the reference formulation, imposed by the drug's very poor water solubility) or loaded in liposomes, was tested in albino rabbits through the conjunctival reflex test [30,31]. Male albino rabbits weighing 2.5-3.0 kg (Morini, San Polo d'Enza, Italy) were utilized. A single rabbit was housed per cage and placed in the experimental room 24 h before the test for acclimatization. Rabbits were kept at 23 ± 1 °C with a 12 h light/dark cycle, under a standard diet regimen and with free access to water. All studies were in agreement with the Directive 2010/63/EU of the European Parliament and of the European Union Council (22 September 2010) on the protection of animals utilized for scientific purposes. The ethical policy of Florence University follows the Guide for Care and Use of Laboratory Animals of the U.S. National Institutes of Health (NIH Publication No. 85-23, revised 1996; Florence University assurance number: A5278-01). The Animal Subjects Review Board of Florence University (A3678, 2017) approved the experiments, which were carried out according to ARRIVE guidelines [41]. Every effort was made to limit animal suffering and the number of animals used. For each formulation, a group of six rabbits was used. A constant dose of each sample was instilled in the conjunctival sac of the rabbit's left eye, while an analogous placebo formulation was instilled in the right eye as control. A cat whisker was used at suitable times to elicit the conjunctival reflex. The intensity of the drug's anesthetic action is directly related to the number of stimuli required to induce the reflex. Statistical Analysis ANOVA coupled with the Student-Newman-Keuls multiple comparison post-test (GraphPad Prism, Version 6, GraphPad Prism Software, San Diego, CA, USA) was used for statistical analysis of the data. Differences with a p-value less than 0.05 were considered statistically significant. Phase-Solubility Studies The proper choice of the most suitable CD to use for drug complexation is critical both to best exploit the benefits of the "drug-in CD-in liposome" strategy and to avoid possible problems of vesicle destabilization due to the CD presence [16,35-37]. Therefore, as a first step of this work, phase-solubility studies of BTB with a series of natural and derivative CDs were performed. The results of these studies (Figure 1) showed that the solubility of BTB increased linearly with increasing concentration of the examined CDs, exhibiting AL-type phase-solubility diagrams, indicative of the formation of highly soluble complexes of presumed 1:1 stoichiometry; the only exception was γCD, where a BS-type phase-solubility diagram was instead observed, an index of the formation of a poorly soluble complex [39]. The apparent 1:1 stability constants of the complexes and the solubilizing efficiency values of the different CDs towards BTB are collected in Table 1.
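The flux-based calculation just described can be summarized in a short sketch: the steady-state flux J is taken as the slope of the linear (terminal) portion of the cumulative-permeation curve, and Kp = J/Cd. The cutoff marking the start of the linear region and all variable names are assumptions for illustration, not values from the study.

```python
# Sketch of the permeation analysis described above. Cd is supplied in mg/mL
# (equal to mg/cm^3), so Kp = J / Cd comes out in cm/h.
import numpy as np

def permeability_coefficient(t_h, q_mg_per_cm2, cd_mg_per_ml, linear_from_h=2.0):
    t = np.asarray(t_h, dtype=float)
    q = np.asarray(q_mg_per_cm2, dtype=float)
    sel = t >= linear_from_h                      # assumed start of the linear region
    J, _ = np.polyfit(t[sel], q[sel], 1)          # steady-state flux, mg/(h*cm^2)
    Kp = J / cd_mg_per_ml                         # permeability coefficient, cm/h
    return J, Kp

# Illustrative (made-up) cumulative permeation data (h vs. mg/cm^2)
print(permeability_coefficient([0, 1, 2, 4, 6, 8], [0, 0.02, 0.06, 0.15, 0.24, 0.33], cd_mg_per_ml=5.0))
```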
(Table 1 notes: * ratio between drug solubility in the presence of a given CD concentration and the solubility of the drug alone (0.86 mM); § not determinable (βCD solubility < 25 mM).)
As is evident, among the natural CDs, βCD emerged as the most effective complexing and solubilizing agent for BTB, despite its lower water solubility, suggesting that its cavity has the most suitable size to host the drug molecule. This result was confirmed also in the case of the examined CD derivatives, where HPβCD was a more effective partner than HPαCD. Moreover, the stability constants of the complexes with all the examined βCD derivatives were distinctly higher than that of the parent βCD, suggesting that the presence of hydroxypropyl and, even more, of methyl or sulfobutyl groups promoted BTB inclusion by expanding the hydrophobic region of the CD cavity and enhancing substrate binding through a hydrophobic effect. The stability constants of the complexes were in the order RAMEB > SBEβCD >> HPβCD > βCD >> HPαCD > αCD >> γCD. The same rank order was also observed for their solubilizing efficiency towards BTB (Table 1).
Solid-State Characterization of Drug-CD Systems
Based on the above results, RAMEB was chosen for further studies as the best carrier for BTB. The method used for preparing drug-CD solid systems can strongly influence the performance of the obtained products in terms of physicochemical properties and dissolution behavior [42,43].
Therefore, the effectiveness of different techniques (kneading, coevaporation, cogrinding, and freeze-drying) for the preparation of solid equimolar BTB-RAMEB systems was evaluated, in order to identify the preparation method able to give the product the best properties. Simple drug-CD physical mixtures were also prepared for comparison purposes. The solid-state properties of the obtained products were investigated by DSC, XRPD, and FT-IR analyses (Figure 2).
BTB exhibited a thermal behavior typical of a pure, crystalline, anhydrous substance, showing a sharp melting endotherm peaking at 57.7 °C (ΔH 112.2 J/g) (Figure 2A). The thermal profile of RAMEB was instead typical of an amorphous, hydrated substance, as it was characterized by an intense and broad endothermal band, ranging between 60 and 100 °C, due to the sample dehydration process. The drug melting peak (even if slightly broadened and shifted to lower temperature, as a consequence of the mixing with the amorphous partner) was evident in the DSC curve of the equimolar PM, indicating the absence of appreciable solid-state interactions between the components. On the contrary, only a residual trace of the drug melting endotherm was detected in the KN product (as indicated by the arrow in Figure 2A), and it completely disappeared in the GR, COE, and COL products. This finding, attributable to inclusion complex formation and/or complete BTB amorphization, is indicative of stronger drug-carrier interactions brought about by these preparation techniques.
The XRPD results (Figure 2B) were in full agreement with those of the DSC analysis, allowing us to exclude any analytical artefact due to sample heating during the thermal analysis. In fact, the XRPD pattern of the BTB-RAMEB PM showed the typical crystallinity peaks of the drug, which clearly emerged from the amorphous pattern of the carrier, thus confirming the absence of interactions between the components. Only a residual crystallinity peak at about 17° 2θ was instead observed in the case of the KN product, whereas a completely amorphous pattern was observed for the GR, COE, and COL products, proving the loss of drug crystallinity up to complete amorphization and/or inclusion complexation within the cavity of the amorphous partner. FT-IR analyses (Figure 2C) were also performed as a complement to the DSC and XRPD studies.
The comparison of the FT-IR spectrum of pure BTB with those of its different equimolar systems with RAMEB can provide further insight into solid-state interactions between the components. Spectra were recorded over the entire 4000-400 cm⁻¹ range, but the most important differences were observed in the zone of the carbonyl stretching, as highlighted by the frame in Figure 2C. In fact, the spectra of the KN, GR and COE products exhibited a shift of the BTB carbonyl band towards higher frequencies (from 1683 to 1702 cm⁻¹), as a consequence of its interactions with the RAMEB molecules. Moreover, the complete disappearance of this band, as well as of those at 1636 and 1597 cm⁻¹, was found in the COL product, suggesting a more intimate interaction of BTB with the carrier and/or a more complete sample amorphization obtained by this preparation technique. On the contrary, all the main BTB absorption bands, including the carbonyl band, appeared almost unchanged in the PM spectrum, confirming the absence of interactions between the components after their simple mixing.
Dissolution Studies of BTB-RAMEB Systems
The results of the dissolution studies of BTB, alone and from its various binary systems with RAMEB, are presented in Figure 3 and Table 2.
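The comparison that follows is expressed mainly as the percentage of drug dissolved at 10 min and the dissolution efficiency at 60 min (DE60), the latter being the area under the dissolution curve up to 60 min as a percentage of the rectangle corresponding to 100% dissolution over the same interval. A minimal sketch of this calculation on hypothetical profiles is given below; the numbers are not those of Figure 3 or Table 2.

```python
# Minimal sketch: dissolution efficiency at 60 min (DE60) by trapezoidal
# integration of a percent-dissolved vs time profile. All data are hypothetical.

def dissolution_efficiency(times_min, percent_dissolved, t_end=60.0):
    """DE(t_end) = AUC(0..t_end) / (100 * t_end) * 100."""
    auc = 0.0
    for (t0, y0), (t1, y1) in zip(zip(times_min, percent_dissolved),
                                  zip(times_min[1:], percent_dissolved[1:])):
        if t0 >= t_end:
            break
        t1 = min(t1, t_end)
        auc += 0.5 * (y0 + y1) * (t1 - t0)   # trapezoid area for this segment
    return auc / (100.0 * t_end) * 100.0

if __name__ == "__main__":
    t = [0, 5, 10, 20, 30, 45, 60]               # min
    plain_drug = [0, 1, 2, 3, 4, 4.5, 5]          # % dissolved (hypothetical)
    gr_product = [0, 55, 76, 85, 89, 91, 92]      # % dissolved (hypothetical)
    print("DE60 plain drug ≈", round(dissolution_efficiency(t, plain_drug), 1), "%")
    print("DE60 GR product ≈", round(dissolution_efficiency(t, gr_product), 1), "%")
```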
As can be observed, all the drug binary systems with RAMEB showed a clear improvement of the BTB dissolution properties compared to the plain drug, but clear differences were detected among the products obtained with the different techniques. The increase in BTB dissolution rate observed with the simple PM can be attributed to the wetting effect of RAMEB towards the hydrophobic drug, as well as to the possible "in situ" formation of the complex. A more important enhancement of the drug dissolution behavior, both in terms of dissolution rate and total amount dissolved, was obtained with the KN and COE products, which gave rise to about 18- and 27-fold increases, respectively, in the amount of drug dissolved at 10 min, and to about 12- and 17-fold increases, respectively, in the dissolution efficiency at 60 min (DE60). Even better results were obtained with the GR and COL products, which allowed more than 92% of the drug to dissolve by the end of the test, with an about 38-fold increase in the drug percentage dissolved at 10 min and an almost 23-fold increase in DE60. The drug aqueous concentration achieved at equilibrium from these latter two systems was higher than 0.5% w/v, a more than 20-fold increase with respect to the drug alone. Furthermore, a good stability of the obtained solutions can be expected, since oversaturation was not reached with respect to the BTB solubility values resulting from the phase-solubility studies (see Figure 1). These findings confirmed the strong influence of the drug-CD preparation method on the performance of the final product, and they indicate cogrinding and colyophilization as the best techniques for preparing BTB-RAMEB solid systems endowed with optimal dissolution properties. The BTB-RAMEB GR product was finally selected for loading into liposomes, since cogrinding is an easier, faster, and less expensive preparation procedure than colyophilization.
Development and Characterization of Liposomal Formulations
Numerous scientific findings suggest that the permeation and carrier function of liposomes into the skin can be enhanced by the presence in the lipid bilayer of a charged surfactant, which increases vesicle fluidity and deformability [22,23]. Moreover, the addition of a cationic surfactant has the additional advantage of further promoting vesicle uptake into the skin via ionic interaction with the negatively charged epithelial cells [44]. The absence of cytotoxicity of SA-containing transfersomes was proved in [45]. Previous studies performed by our research group showed that the addition to the vesicle bilayer of the cationic surfactant stearylamine (SA) enabled an increase in therapeutic efficacy, in terms of intensity and duration of action, of liposomal formulations bearing local anesthetics such as benzocaine, butamben, and prilocaine [30,31].
In particular, an optimized liposomal formulation was developed, consisting of a PC:CH:SA mixture at a 5.5/1.0/1.5 molar ratio, which was "double-loaded" with 0.7% w/v free BTB in the lipid phase and 0.3% w/v BTB as a complex with HPβCD in the aqueous phase, the latter corresponding to the saturation solubility of this complex [30]. In vivo studies on rabbits evidenced a significant increase (p < 0.05) in intensity and duration (from 40 to 60 min) of the anesthetic effect of the "double-loaded" BTB liposomal formulation with respect to a corresponding formulation loaded with 1% plain BTB in the lipid phase. However, the limited solubility of the BTB-HPβCD complex not only did not allow us to adequately increase the drug concentration in the aqueous phase of the vesicles, but it also gave rise to a reduction of the drug entrapment efficiency with respect to the corresponding liposomes loaded with all of the plain drug (1%) in the lipid bilayer [30]. Therefore, in the present work, phase-solubility studies were performed on a variety of natural and derivative CDs in order to find the most effective complexing and solubilizing agent towards BTB. From these studies, RAMEB emerged as the best partner for the drug, with a more than 40% increase in solubilizing efficiency with respect to the previous partner HPβCD. The greater aqueous solubility of the BTB-RAMEB complex compared with the BTB-HPβCD complex enabled us to fully exploit the "double-loading" technique and to prepare liposomes bearing 0.5% w/v drug in the aqueous phase (as a hydrosoluble complex with RAMEB) and 0.5% w/v free drug in the bilayer. In addition, the much higher stability constant of the BTB-RAMEB complex with respect to that of the BTB-HPβCD complex (10,460 vs. 1910 M⁻¹) should ensure greater vesicle stability, avoiding any possible problem of competition between the lipid vesicle components and BTB for interaction with the CD [35-37]. The safety of RAMEB as an excipient in topical formulations, including ocular and nasal ones, was assessed in [46]; the suggested thresholds to avoid the risk of adverse effects in ocular or nasal formulations are 5% or 10% w/v, respectively, i.e., amounts clearly higher than those used in the present formulations (<3% w/v). Moreover, in an attempt to further improve the performance of the liposomal BTB formulation, the effect of also adding sodium cholate (SC) as an edge activator was evaluated, considering its proven ability to improve the vesicles' capacity to penetrate and effectively carry the drug through the skin [24,47]. The absence of cytotoxicity of SC-based transfersomes was shown in [48]. Thus, a liposomal formulation consisting of a PC:CH:SA:SC mixture at a 5.5/1.0/1.5/1.0 molar ratio was also developed. In order to assess the actual advantages of the "double-loading" approach, both single-loaded (SL) liposomes, containing 1% w/v drug in the lipophilic phase, and double-loaded (DL) liposomes, containing 0.5% w/v drug in the aqueous phase (as a hydrosoluble complex with RAMEB) and 0.5% w/v free drug in the bilayer, were prepared. The various liposomal formulations were evaluated for mean dimensions, polydispersity index (PDI), Z-potential, deformability, and encapsulation efficiency (Table 3).
Table 3. Composition, particle size, polydispersity index (PDI), Z-potential, deformability and encapsulation efficiency (EE%) of butamben liposomes single-loaded (SL) with 1% w/v free drug in the lipid phase, or double-loaded (DL) with 0.5% w/v free drug in the lipid phase and 0.5% w/v drug in the aqueous phase as RAMEB complex.
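To put the "double-loading" figures above into perspective, the following back-of-the-envelope sketch estimates the amount of RAMEB needed to carry 0.5% w/v BTB in the aqueous phase as a presumed 1:1 complex. The molecular weights used (about 193 g/mol for butamben and about 1310 g/mol as a nominal average for RAMEB) are literature-typical assumptions, not values reported in this work, and the batch volume is arbitrary.

```python
# Back-of-the-envelope sketch of the "double-loading" mass balance for a
# presumed 1:1 BTB:RAMEB inclusion complex. Molecular weights are assumed
# values, not data reported in this work.

MW_BTB = 193.2     # g/mol, butamben (butyl 4-aminobenzoate), assumed
MW_RAMEB = 1310.0  # g/mol, nominal average for randomly methylated beta-CD, assumed

def rameb_for_complex(btb_percent_wv, volume_ml):
    """Grams of RAMEB needed to complex the aqueous-phase BTB at a 1:1 molar ratio."""
    btb_g = btb_percent_wv / 100.0 * volume_ml   # 0.5% w/v -> 0.5 g per 100 ml
    btb_mol = btb_g / MW_BTB
    return btb_mol * MW_RAMEB

if __name__ == "__main__":
    grams = rameb_for_complex(0.5, volume_ml=100.0)   # illustrative 100 ml batch
    print(f"RAMEB required for 0.5% w/v BTB in 100 ml ≈ {grams:.2f} g (1:1 complex)")
```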
No important differences in size or PDI values were observed between the double-loaded (DL) and the corresponding single-loaded (SL) liposomes (SL1 vs. DL1, or SL2 vs. DL2) (p > 0.05), or between formulations containing stearylamine (SA) alone or in combination with sodium cholate (SC) (SL1 and DL1 vs. SL2 and DL2) (p > 0.05). All the examined liposomal dispersions exhibited good deformability, as proven by values near unity of the ratio of vesicle diameters before and after extrusion, a property that should ensure good penetration and permeation through biological membranes. Regarding the surface charge of the vesicles, as expected, the presence of the anionic surfactant SC gave rise to a decrease of the positive Z-potential values with respect to the formulations containing the cationic surfactant SA alone. On the other hand, the slightly lower Z-potential values of the SL vesicles with respect to the corresponding DL ones could be due to the presence in their bilayer of more BTB molecules, whose amino group is unionized at physiological pH [49]. Finally, all formulations gave high EE% values, always higher than 90%. However, it is worth highlighting the important advantage achieved with respect to the previously developed liposomal formulation double-loaded with the BTB-HPβCD complex [30]. In fact, in the previous case, due to the limited solubility and stability of the BTB complex with HPβCD, the use of the double-loading approach gave rise to a significant decrease in drug EE% compared to the corresponding formulation containing all the plain drug in the liposomal bilayer [30]. On the contrary, in the present case, by virtue of the greater solubilizing and complexing power of RAMEB towards BTB, no decrease (DL1 vs. SL1) or even an increase (DL2 vs. SL2) in EE% values was observed for the DL formulations compared to the corresponding SL formulations. Furthermore, the EE% obtained with both DL formulations (93.8% for DL1 and 99.8% for DL2) was clearly higher than the 82% obtained with the previous DL formulation bearing the BTB-HPβCD complex [30]. Finally, the highest EE% value, near 100%, was found for the DL2 formulation, containing SA and SC in combination.
The TEM and SEM analyses (Figure 4) indicated that all the liposomal vesicles were of homogeneous dimensions and almost spherical shape and presented the typical multilamellar structure. Interestingly, neither appreciable changes in vesicle morphology nor the appearance of aggregation or vesicle destruction phenomena were detected in the DL formulations compared to the corresponding SL liposomes, which did not contain RAMEB. This positive outcome may be attributed to the very high affinity of RAMEB for BTB, which avoided possible competition by the lipid components of the bilayer for interaction with this CD and allowed for the formation of stable liposomal vesicles [35,37].
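The deformability values discussed above are expressed as the ratio of the mean vesicle diameters measured before and after extrusion through 100 nm pores, with values close to unity indicating elastic vesicles. The short sketch below shows one common way to compute such an index; the ratio direction (after/before) and the size values are illustrative assumptions.

```python
# Sketch: deformability expressed as the ratio of mean vesicle diameters
# measured after vs. before extrusion through 100 nm pores (values near 1
# indicate elastic, deformable vesicles). Sizes are hypothetical.

def deformability_index(size_before_nm, size_after_nm):
    return size_after_nm / size_before_nm

if __name__ == "__main__":
    before, after = 180.0, 172.0   # nm, hypothetical means of five measurements
    print(f"Deformability index ≈ {deformability_index(before, after):.2f}")
```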
Stability Studies of Liposomal Formulations
A frequent occurrence during the storage of liposomal formulations, considered indicative of poor stability of the colloidal dispersion, is an increase in vesicle size as a consequence of aggregation and/or coalescence phenomena, which can also negatively affect their drug-carrier and drug-release properties. The results of the stability studies proved that, for all the examined liposomal dispersions, the mean dimensions remained almost unmodified during 3 months of storage at 4 °C. In fact, the observed changes in vesicle size were in all cases lower than 10% compared to the values of the corresponding freshly prepared samples. Moreover, the PDI values were stable during that time, indicating the maintenance of homogeneity, and only negligible zeta potential variations were observed, supporting the physical stability of the colloidal dispersions. Regarding instead possible drug leakage during storage, the EE% values of the SL formulations decreased by more than 10% at the end of the storage period, dropping from 92.2 ± 3.8 to 81.7 ± 2.9 and from 94.6 ± 2.6 to 83.2 ± 3.1 for the SL1 and SL2 formulations, respectively. On the contrary, in the case of the DL formulations, drug leakage was clearly more limited, with a decrease in EE% values lower than 5%, passing from 93.8 ± 3.0 to 90.1 ± 2.9 (DL1) and from 99.8 ± 1.3 to 96.2 ± 1.6 (DL2).
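The storage-stability comparison above rests on simple relative changes. The sketch below reproduces that arithmetic, using the EE% values quoted in the text and a hypothetical size pair, so the SL vs. DL contrast (a >10% versus <5% EE% loss) can be checked at a glance.

```python
# Sketch: relative changes used in the storage-stability comparison.
# EE% values are those quoted in the text; size values are hypothetical.

def percent_change(initial, final):
    return (final - initial) / initial * 100.0

if __name__ == "__main__":
    ee_start_end = {"SL1": (92.2, 81.7), "SL2": (94.6, 83.2),
                    "DL1": (93.8, 90.1), "DL2": (99.8, 96.2)}
    for name, (start, end) in ee_start_end.items():
        print(f"{name}: EE% change over 3 months ≈ {percent_change(start, end):.1f}%")
    # Example size check (hypothetical): a 185 -> 195 nm shift is a +5.4% change,
    # below the 10% threshold discussed in the text.
    print(f"Size change ≈ {percent_change(185.0, 195.0):.1f}%")
```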
These stability results proved the better efficiency of the double-loading technique in reducing drug leakage from the liposomal vesicles, given that the fraction of drug present in the internal aqueous phase as a hydrophilic CD complex is less prone to premature release than the free drug present in the vesicle bilayer.
In Vitro Drug Permeation Studies
The drug apparent permeability coefficients (Kp), obtained from permeation studies of BTB across excised rabbit skin from the different liposomal dispersions, either as such or formulated as Carbopol gel, are reported in Table 4; the related drug permeation profiles are shown in Figure 5A,B, together with the curve obtained from the drug solution in Carbopol gel.
An initial lag-phase was observed, attributed to the time necessary to saturate the skin membrane; it was slightly longer for the gel formulations, probably due to the presence of the polymeric network, which somewhat decreased drug diffusion. All the liposomal formulations, both as such (Figure 5A) and formulated as gel (Figure 5B), showed clearly better drug permeation properties (p < 0.001) than the drug solution, confirming the skin penetration ability and permeation-enhancing properties of elastic liposomal vesicles on drug delivery [22-27]. The gel liposomal formulations showed a slight, although not significant (p > 0.05), reduction in drug permeation rate with respect to the corresponding liposomal dispersions, which, as for the initially longer lag-time, can be attributed to the gel network slowing down drug diffusion. As can be observed, the DL liposomes exploiting the "drug-in CD-in liposome" approach showed significantly better (p < 0.05) drug permeation profiles than the corresponding SL ones. A role of RAMEB as a skin permeation enhancer could be hypothesized to explain this result. Similar findings have been reported for microemulsions and suspensions, where the addition of RAMEB to the formulations allowed a significant improvement in drug permeation through excised animal membranes, attributed to the ability of RAMEB to reversibly remove lipids from the stratum corneum, thus temporarily reducing its barrier effect [29,50]. Moreover, a better performance of the DL2 and SL2 liposomal formulations compared with the corresponding DL1 and SL1 ones was observed, even though it was of borderline statistical significance (p ≈ 0.05). This effect could be explained by the joint presence in the former of SA and SC, which probably enabled a more efficient drug permeation.
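As a companion to the permeation data above, the sketch below shows the standard way the flux J and the permeability coefficient Kp = J/Cd are extracted from Franz-cell profiles: a linear fit of the cumulative amount permeated per unit area against time over the post-lag linear portion. The data points and the donor concentration are hypothetical, not values from Table 4.

```python
# Sketch: flux (J) and permeability coefficient (Kp = J/Cd) from the linear
# portion of a cumulative-permeation profile. All data are hypothetical.

def linear_fit(x, y):
    """Least-squares slope and intercept without external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

if __name__ == "__main__":
    t_h = [2, 3, 4, 5, 6]                      # h, post-lag time points
    q_mg_cm2 = [0.10, 0.16, 0.22, 0.28, 0.34]  # cumulative mg/cm^2 permeated
    J, _ = linear_fit(t_h, q_mg_cm2)           # flux, mg/(h*cm^2)
    Cd = 5.0                                   # mg/cm^3, initial donor concentration
    Kp = J / Cd                                # (mg/(h*cm^2)) / (mg/cm^3) = cm/h
    print(f"J ≈ {J:.3f} mg/(h*cm^2); Kp ≈ {Kp:.4f} cm/h")
```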
In Vivo Studies
The results of the in vivo studies, presented in Figure 6, proved that all the liposomal gel formulations gave rise to a significant (p << 0.05) increase in both intensity and duration of action in comparison with the gel containing the drug hydroalcoholic solution, whose anesthetic effect was significantly different from the respective blank control for only 30 min. Both the drug-loading mode and the presence or absence in the vesicle bilayer of SC together with SA played a role in the performance of the liposomal formulations. In fact, the formulations exploiting the double-loading approach showed a significantly longer duration of action (p < 0.05) than the corresponding single-loaded formulations, prolonging the anesthetic effect up to 75 min (DL1) or even up to 90 min (DL2), compared with the 40 min and 60 min obtained with the corresponding SL1 and SL2 formulations. The better performance of the DL formulations compared with the SL ones can be explained by the combined presence of a free drug fraction in the external bilayer, which provided a rapid initial effect, and of a drug fraction in the internal vesicle core, in the form of a complex with RAMEB, which ensured a more prolonged drug release profile.
(Figure 6 caption: see Table 3 for liposomal formulation composition. Six rabbits per group; mean of six experiments. The control consisted of a Carbopol gel containing the corresponding formulation without drug. ^ p < 0.05 vs. drug solution; * p < 0.05 vs. SL formulation; # p < 0.05 vs. DL1 formulation.)
Moreover, the anesthetic effect provided by the DL1 formulation was even more prolonged than that obtained with the previously developed DL formulation of similar composition containing BTB as the HPβCD complex (75 vs. 60 min) [29], thus confirming the important role played by the CD type used for drug complexation in affecting drug release from the liposomal vesicles [16,35,36]. This result can in fact be ascribed both to the higher solubilizing effect of RAMEB towards BTB, which allowed the drug loading in the aqueous core of the vesicles to be increased, and to the higher stability of the BTB-RAMEB complex, which in turn enabled a better control of drug release. Regarding instead the greater effectiveness of the formulations containing SA in combination with SC (DL2 vs. DL1 and SL2 vs. SL1), it can be explained by the enhancer effect of this edge activator, which probably favored a better penetration of the liposomal vesicles through the skin [51]. On the other hand, a possible synergistic effect between the cationic surfactant SA and the anionic surfactant SC in increasing skin permeability could also be hypothesized, as observed by other authors [52].
Conclusions
A new effective liposomal formulation of BTB, endowed with enhanced skin delivery and an increased duration of the anesthetic effect, was successfully developed.
RAMEB, selected by preliminary phase-solubility studies as the best complexing agent for BTB, was utilized to develop deformable liposomes double-loaded with 0.5% plain drug in the vesicle external bilayer and 0.5% drug as a hydrophilic complex with RAMEB in the internal aqueous core. The use of RAMEB for BTB complexation proved to be essential for improving the performance of the liposomal formulations, which were clearly superior to a previously developed formulation based on the HPβCD-BTB complex [30], thus evidencing the importance of a proper choice of the CD type used for drug complexation in order to take maximum advantage of the "double-loading" strategy. In fact, the greater solubility and much higher stability of the BTB-RAMEB complex compared with the BTB-HPβCD complex made it possible not only to increase the drug entrapment efficiency of the vesicles but also to obtain a more controlled and prolonged drug release, as proven by the significant increase in the duration of the BTB anesthetic effect observed in the in vivo experiments. The superior properties of the double-loaded (DL) liposomes compared with the corresponding single-loaded (SL) ones, particularly in terms of greater stability against drug leakage during storage and better control of drug release, with a consequently more prolonged anesthetic effect in vivo, were also proven. Moreover, the influence of the edge activator type on the performance of deformable liposomes was confirmed. In particular, the DL liposomes containing the combination of SA and SC emerged as the best formulation, showing not only the highest entrapment efficiency (99%) but also the longest duration of the anesthetic effect in vivo (90 min). Finally, a strong correlation between the in vitro BTB permeation experiments through excised rabbit ear skin and the in vivo anesthetic efficacy was found, indicating that in vitro studies could be useful in preclinical tests for selecting the most efficient formulation to be subjected to in vivo studies.
Synthesis of aromatic amino acids from 2G lignocellulosic substrates
Summary
Pseudomonas putida is a highly solvent-resistant microorganism and a useful chassis for the production of value-added compounds from lignocellulosic residues, in particular aromatic compounds made from phenylalanine. The use of these agricultural residues requires a two-step treatment to release the components of the polysaccharides of cellulose and hemicellulose as monomeric sugars, the most abundant monomers being glucose and xylose. Pan-genomic studies have shown that Pseudomonas putida metabolizes glucose through three convergent pathways to yield 6-phosphogluconate, which is subsequently metabolized through the Entner-Doudoroff pathway, but the strains do not degrade xylose. The valorization of both sugars is critical from the point of view of the economic viability of the process. For this reason, a P. putida strain was endowed with the ability to metabolize xylose via the xylose isomerase pathway, by incorporating heterologous catabolic genes that convert this C5 sugar into intermediates of the pentose phosphate cycle. In addition, the open reading frame T1E_2822, encoding glucose dehydrogenase, was knocked out to avoid the production of the dead-end product xylonate. We generated a set of DOT-T1E-derived strains that metabolized glucose and xylose simultaneously in culture medium and reached high cell density, with generation times of around 100 min on glucose and around 300 min on xylose. The strains grew in 2G hydrolysates from dilute-acid and steam-explosion pretreated corn stover and sugarcane straw. During growth, the strains metabolized >98% of the glucose, >96% of the xylose and >85% of the acetic acid. In 2G hydrolysates, P. putida 5PL, a DOT-T1E derivative strain that carries up to five independent mutations to avoid phenylalanine metabolism, accumulated this amino acid in the medium. We constructed P. putida 5PLΔgcd (xylABE), which produced up to 250 mg l⁻¹ of phenylalanine when grown on 2G pretreated corn stover or sugarcane straw. These results support, as a proof of concept, the potential of P. putida as a chassis for 2G processes.
Introduction
Society is in favour of replacing polluting fossil fuels (the main source of energy) and oil-derived chemicals with alternative renewable sources to combat global climate change (Ragauskas et al., 2006; Valdivia et al., 2016). A variety of new green energy sources are currently being used (bio-, eolic, thermosolar and photovoltaic energy); however, the sole and main alternative for oil-derived chemicals today is a new 'green chemistry' based on plants as the only sustainable and renewable source of organic carbon on earth. This so-called new chemistry is expected to move the organic chemical industry towards net zero emissions (Ragauskas et al., 2006, 2014; Linger et al., 2014; Isikgor and Becer, 2015; Beckham et al., 2016; Ramos et al., 2016; Duque and Ramos, 2019). At present, the major source of plant-derived materials to produce a number of chemicals is starch, whose hydrolysis yields glucose, which in turn is converted into different chemicals through fermentation processes. The use of starch to obtain biofuels and biochemical commodities is, however, controversial because of the overlap with the food chain.
This controversy provoked a shift at the end of the last century, when scientists began working on replacing starch with non-edible plant biomass such as lignocellulose, an attractive source of raw material due to its abundance and because it does not compete directly with foodstuffs. However, the use of lignocellulosic materials to produce bioethanol and biochemicals is challenging because the biomass requires intensive pretreatment (mainly physico-chemical) to deconstruct it and release cellulose, hemicellulose and lignin (Alcántara et al., 2016). Sugars can be obtained from cellulose and hemicellulose in a process called saccharification, which involves enzymatic hydrolysis by a set of enzymes generically known as cellulases that work synergistically to produce monomeric sugars (Öhgren et al., 2007; Alvarez et al., 2016). Glucose (about 57-70%) is the major product that results from the hydrolysis, followed by xylose (about 9-23%) and other minor sugars such as arabinose, rhamnose and galactose (van Maris et al., 2006; Rocha-Martín et al., 2018; see Table S1 for data from Rocha-Martín et al., 2018). However, many industrially relevant microorganisms only ferment glucose and leave the other sugars untransformed. Technoeconomic studies by Valdivia et al. (2016, 2020) revealed that processing of the C5 sugar xylose is a must in order to profitably produce bioethanol or other chemicals from biomass. A number of xylose catabolic pathways have been engineered for heterologous expression in industrially relevant microorganisms. Recombinant xylose-utilizing strains of yeasts (García-Sanchez et al., 2010) and of various other microorganisms such as Zymomonas mobilis (Zhang et al., 1995), Corynebacterium glutamicum (Kawaguchi et al., 2006), Bacillus subtilis (Chen et al., 2013; Zhang et al., 2016), Kluyveromyces marxianus (Suzuki et al., 2019) and Pseudomonas putida (Meijnen et al., 2008, 2009; Le Meur et al., 2012; Dvořák and de Lorenzo, 2018; Bator et al., 2020) have been constructed and can be considered a first step towards efficient biomass utilization. An aim of synthetic biology is to establish a universal platform for the synthesis of 'all chemicals'. We are, however, far from that objective today, and several microbial platforms based on recombinant microbes are being used to produce different chemicals (Calero and Nikel, 2019; Liu and Nielsen, 2019). Bioprocess yields depend on the metabolic route and on the stoichiometry of the pathways, as well as on the intrinsic properties of the chemicals and the microbial tolerance to the high concentrations of substrates and products that are demanded during the industrial production of commodities (Bator et al., 2020). We have focused our attention on the production of aromatic chemicals from sugars. To this end, we have chosen P. putida DOT-T1E as a production platform because of its high resistance to a wide range of aromatic compounds, including aromatic hydrocarbons such as toluene, xylenes and styrene (Rojas et al., 2003; Ramos et al., 2015). This property confers a significant advantage over other P. putida strains that are only moderately tolerant to solvents, for instance KT2440 (Segura et al., 2012). DOT-T1E is a non-pathogenic strain of Pseudomonas putida. The pangenome of the species revealed a set of core genes common to all strains that define the basic physiological properties of these microorganisms.
The species also has a set of accessory genes, present in two or more but not all strains of the species, which confer specific properties (e.g., biodegradation of linear and/or aromatic hydrocarbons) and the ability to colonize different niches. The pangenome showed that the species P. putida is characterized by a limited ability to consume sugars, namely glucose, gluconate and fructose, which are mainly metabolized through the Entner-Doudoroff pathway (Nelson et al., 2002; del Castillo et al., 2007; Daddaoua et al., 2009; Daniels et al., 2010; Tiso et al., 2014; Calero and Nikel, 2019). KEGG analysis revealed that P. putida DOT-T1E, similarly to other strains of the species, contains a set of genes that would allow xylose catabolism if different peripheral modules were added to transform xylose into central metabolic intermediates. In fact, Bator et al. (2020) described that up to three peripheral xylose pathways can be implemented in P. putida KT2440 to allow growth on xylose. These pathways are known as the isomerase pathway and the oxidative Dahms and Weimberg pathways (Meijnen et al., 2008, 2009; Bator et al., 2020). Bator et al. (2020) analysed the biotransformation of xylose into 14 different chemicals by recombinant P. putida strains and showed that, for 12 of the 14 products, the maximal yields were obtained with the isomerase pathway. In this metabolic pathway, xylulose-5-phosphate enters the pentose phosphate cycle and is eventually transformed into erythrose-4-phosphate, a starting metabolite for the synthesis of aromatic amino acids via the shikimate pathway (Fig. 1). The isomerase pathway is characterized by the presence of a xylose isomerase (XylA), which transforms xylose into xylulose, and a xylulokinase (XylB), which subsequently phosphorylates the latter to xylulose-5-phosphate, which enters the pentose phosphate (PP) cycle (Wilhelm and Hollenberg, 1985; Amore et al., 1989; Mishra and Singh, 1993; Hahn-Hägerdal and Pamment, 2004). The xylA and xylB genes from the isomerase pathway of E. coli have already been engineered into the KT2440 and S12 strains to allow the use of xylose (Le Meur et al., 2012; Dvořák and de Lorenzo, 2018). In both strains, the efficient use of xylose as a C source requires the inactivation of glucose dehydrogenase (Gcd) to avoid the misrouting of xylose to dead-end xylonate. While S12 was able to grow on xylose upon incorporation of xylAB (Meijnen et al., 2008), Dvořák and de Lorenzo (2018) and Elmore et al. (2020) described that KT2440 requires, in addition to xylAB, the xylE gene, encoding a proton-coupled symporter, to facilitate the entry of xylose into the cell. This study shows that the isomerase pathway together with the xylE gene can be expressed in both the P. putida DOT-T1E wild-type strain and its glucose dehydrogenase (gcd) mutant derivatives. The recombinant strains grew not only on xylose in minimal medium but also on corn stover and sugarcane straw 2G hydrolysates, consuming >98% of the glucose and >96% of the xylose. Furthermore, P. putida 5PL, a mutant derivative of DOT-T1E that overproduces L-phenylalanine (Molina-Santiago et al., 2016), was also engineered to inactivate the gcd gene and transformed with pSEVA633_xylABE. The resulting 5PLΔgcd (xylABE) strain produced L-phenylalanine from 2G hydrolysates, confirming the potential of Pseudomonas putida as a chassis to produce value-added goods from agricultural residues.
Results
Construction of Pseudomonas putida DOT-T1E derivatives that use xylose and produce L-phenylalanine
Pseudomonas putida DOT-T1E (Table 1) does not use xylose as a carbon source (Table 2); however, when DOT-T1E is incubated with xylose, it consumes the sugar and stoichiometrically converts it into the dead-end product xylonate.
(Fig. 1 caption: Proposed synthesis of L-phenylalanine (L-Phe) from glucose and xylose in an engineered P. putida Δgcd (xylABE) strain. The exogenous xylose transporter (blue) and xylose isomerase pathway enzymes (green) are highlighted. Green arrows indicate the exogenously supplemented xylose isomerase pathway; brown arrows, native pentose phosphate pathway reactions; dotted black arrows, native shikimate pathway reactions; dotted red arrows, the deleted glucose dehydrogenase (gcd) reaction. Abbreviations: PQQ, pyrroloquinoline quinone; G6P, D-glucose-6-phosphate; 6PG, 6-phosphogluconate; KDPG, 2-dehydro-3-deoxyphosphogluconate; G3P, D-glyceraldehyde-3-phosphate; FBP, fructose-1,6-bisphosphate; F6P, fructose-6-phosphate; X5P, xylulose-5-phosphate; D-Ri5P, D-ribulose-5-phosphate; R5P, ribose-5-phosphate; S7P, sedoheptulose-7-phosphate; E4P, erythrose-4-phosphate; PEP, phosphoenolpyruvate; DAHP, 3-deoxy-D-arabinoheptulosonate-7-phosphate; DHQ, 3-dehydroquinate; DHS, 3-dehydroshikimate; SHIK, shikimate; SH3P, shikimate-3-phosphate; EP5P, 5-enolpyruvylshikimate-3-phosphate; CA, chorismate; PA, prephenate; PP, phenylpyruvate; Glu, glutamate; 2OG, 2-oxoglutarate; L-Phe, L-phenylalanine.)
(Table 2 notes: tg, generation time expressed in min; Y, yield expressed as g cells g⁻¹ sugar consumed within 24 h; Q, sugar consumption rate expressed in g l⁻¹ h⁻¹. For the glucose + xylose assay, the first value corresponds to the glucose consumption rate and the second to the xylose consumption rate; n.g., no growth; n.d., not determined. Growth conditions and analytical determinations are described in detail under Experimental procedures. The data in the table are the average of at least two independent assays.)
To endow the strains with the ability to use xylose, they were first transformed with a gentamycin-resistance construct carrying the xylAB genes (Fig. S1). The transformants were spread on M9 minimal medium plates supplemented with gentamycin and glucose or xylose as the C source; no transformants able to grow on xylose were found, while the rate of gentamycin-resistant (GmR) transformants on glucose was 10⁴ clones per µg DNA. None of the glucose-selected clones was able to grow on xylose in spite of long-term incubation. This result contrasts with that observed in P. putida S12 bearing the xylAB genes, which acquired xylose degradation capacity upon enrichment on xylose (Meijnen et al., 2008). However, it is in line with the results observed in KT2440, which revealed that the mere incorporation of the catabolic genes was not sufficient to allow growth of this strain on xylose (Dvořák and de Lorenzo, 2018). These authors showed that, in addition, a xylose symporter was needed to facilitate the uptake of xylose and its metabolism (Dvořák and de Lorenzo, 2018; Elmore et al., 2020; Espeso et al., 2021). To address this, we subcloned the synthetic xylABE operon designed by Dvořák and de Lorenzo (2018) into the GmR pSEVA633 plasmid. (This construction was made to use a Gm-resistance cassette compatible with the Km-resistance marker present in the Δgcd strains.) DOT-T1E and DOT-T1EΔgcd bearing the xylABE construct were selected on Gm-containing M9 minimal plates with glucose as the sole C source. These clones grew when replicated onto Gm-containing M9 minimal plates with xylose, although at a slower rate than on glucose.
We then used liquid culture medium to compare the growth rates, growth yields and sugar consumption rates of DOT-T1E, DOT-T1EΔgcd, DOT-T1E (xylABE) and DOT-T1EΔgcd (xylABE) on glucose, xylose and mixtures of glucose and xylose (Table 2). We found that growth of DOT-T1E and DOT-T1EΔgcd on minimal medium with glucose was similar, reaching a turbidity at 660 nm of around 4 and ~10¹⁰ CFU ml⁻¹. Doubling times were about 100 min (see Fig. S2), and the yields of both strains on glucose were in the order of 0.35 to 0.36 g dry weight g⁻¹ glucose consumed (Table 2). This indicated that the lack of the gcd gene did not significantly affect glucose utilization in these strains. In fact, the rate of carbon consumption of both strains was 0.25 g l⁻¹ h⁻¹. The two strains bearing the xylABE genes grew on glucose at a similar rate (101-105 min doubling time) to the strains without pSEVA633_xylABE, suggesting that the xylABE gene load does not influence the growth rate. On xylose, the DOT-T1E (xylABE) and DOT-T1EΔgcd (xylABE) strains grew to reach 10⁹ to 10¹⁰ CFU ml⁻¹, with doubling times of about 300 min (Table 2) and yields in the range of 0.32 to 0.40 g dry weight g⁻¹ xylose consumed. The rate of xylose consumption was around 0.22 g l⁻¹ h⁻¹. Similarly to other P. putida strains in which the xylose isomerase pathway has been engineered, our recombinant strains also metabolized xylose via the PP cycle. Elmore et al. (2020) showed that increased expression levels of two genes (tal and tkt) were needed for efficient channelling of xylose through the PP cycle in KT2440. However, this might not be the case for DOT-T1E, because high growth rates, high xylose consumption rates and high yields were obtained without exogenous increases in tal and tkt expression. The growth parameters and C consumption rates we achieved with the DOT-T1E derivatives were slightly higher than those described for KT2440 engineered to use xylose (Dvořák and de Lorenzo, 2018). As the major sugars in 2G hydrolysates are glucose and xylose, we compared the growth characteristics on glucose plus xylose (at a 3:1 ratio) of DOT-T1E and DOT-T1EΔgcd with and without the xylABE genes (Table 2). As expected, with mixtures of glucose and xylose, all clones grew with similar doubling times (Table 2). For the two clones that used xylose as a C source, we found that cultures reached stationary phase after 24 h without apparent diauxic growth. In fact, time-course analysis of glucose and xylose consumption with the DOT-T1E (xylABE) and DOT-T1EΔgcd (xylABE) strains revealed simultaneous utilization of both sugars from the beginning, with glucose assimilation rates of around 0.17 mmol l⁻¹ h⁻¹ for both strains, while xylose was assimilated at a rate of 0.06 g l⁻¹ h⁻¹. Therefore, C5 sugar utilization was not under carbon catabolite control in the recombinant P. putida strains. The yields of the strains using glucose and xylose were in the range of 0.29 to 0.39 g dry weight per g sugar consumed (Table 2). A previous comparative analysis based on yields and flux balances in Pseudomonas strains indicated that the best pathway for the synthesis of added-value chemicals depends on the stoichiometry of the reactions (Bator et al., 2020), and these authors concluded that the isomerase pathway is the most favourable. We have previously described that the DOT-T1E 5PL mutant strain is able to produce L-phenylalanine from glucose (Molina-Santiago et al., 2016). The strain was engineered to block phenylalanine catabolism and bears a pheAbr mutation.
The pheAbr allele encodes a variant of the enzyme that catalyses the Claisen rearrangement of chorismate to prephenate and its subsequent conversion into phenylpyruvate, the first step in phenylalanine biosynthesis; the mutant PheAbr enzyme is not susceptible to feedback inhibition by the accumulation of phenylalanine. Strain 5PL was therefore considered a potentially useful chassis for aromatic amino acid production from 2G sugars. To explore this, we introduced the xylABE operon into 5PL and its isogenic Δgcd mutant. We tested the growth of 5PL (xylABE) and 5PLΔgcd (xylABE) on glucose, xylose and a mixture of both sugars (Table 2). On glucose, these strains reached cell densities of 10⁹-10¹⁰ CFU ml⁻¹, with doubling times in the range of 100-115 min and growth yields of 0.25-0.39 g dry weight per g glucose, respectively (Table 2). Carbon consumption rates of 5PL (xylABE) and 5PLΔgcd (xylABE) on xylose were similar to those achieved by the equivalent constructions in DOT-T1E, even though these strains grew only linearly, reaching cell densities in the order of 10⁹-10¹⁰ CFU ml⁻¹. Growth rates, growth yields and C-consumption rates in the presence of glucose and xylose were similar to those measured with glucose alone. We analysed L-phenylalanine production from glucose and xylose by 5PLΔgcd (xylABE) and found that production was higher with glucose (225 ppm) than with xylose (10 ppm), but that the amount of L-phenylalanine produced with mixtures of both sugars was similar to that measured with glucose (Table 3). The limited production of L-phenylalanine from xylose arose not only from the limited uptake of this carbon source but also from limited synthesis of shikimate, because exogenous addition of 1 mM shikimate to a xylose-metabolizing strain enhanced L-phenylalanine production to 75 ppm (Table 3). In agreement with this series of results, the level of L-phenylalanine produced with glucose plus xylose was slightly lower than, and approached, that reached with glucose alone (Table 3). This supports the idea that, although xylose is directly channelled into the PP cycle, the level of PP cycle intermediates was higher with glucose than with xylose. We envisage that improvements in L-phenylalanine production could be obtained through manipulation of the aro genes, as has been described for L-phenylalanine production in E. coli (Liu et al., 2018) and for aromatic derivatives in P. putida (Loeschcke and Thies, 2020; Schwanemann et al., 2020). In any case, our set of results supports that P. putida bearing the xylABE genes is a potentially useful platform for the metabolism and biotransformation of 2G sugars into added-value chemicals.
Growth of Pseudomonas putida DOT-T1E strains on 2G substrates derived from corn stover and sugarcane straw
Because the ultimate aim of this work is to explore the potential of Pseudomonas putida as a platform to produce added-value chemicals from 2G substrates, we decided to test the behaviour of the set of strains constructed in this study in hydrolysates of 2G lignocellulosic material upon saccharification. The lignocellulosic materials we chose were dilute-acid and steam-explosion pretreated corn stover (PCS) and sugarcane straw (PSCS). PCS and PSCS were prepared at the York (Iowa, USA) laboratory of Abengoa Bioenergy. Figure 2 (top) shows the structure of corn stover and sugarcane stalks, with a set of fibrils in the centre that includes the xylem and phloem tubes characteristic of these plants. Upon steam explosion in acidic medium, the cell material was found to be disorganized and long fibres appeared (Fig. 2, bottom).
The composition of PCS (Table S1) is about 35% glucan as cellulose, 7.8% xylan-hemicellulose, 3% soluble glucose/cellobiose and 16% soluble xylose (Rocha-Martín et al., 2018). For PSCS, the composition was 36% glucan-cellulose, 4.5% xylan-hemicellulose, 3% soluble glucose/cellobiose and 8.2% soluble xylose (Rocha-Martín et al., 2018). To release monomeric sugars from the polymers, a suspension of 10% (w/v) of the pretreated lignocellulosic materials was digested for 24 h with different amounts of the commercial cellulase cocktail Viscozyme (Sigma-Merck) at pH 5.5-6.0 in 1× M9 solution, incubated at 50°C with moderate agitation (50-70 rpm in an orbital shaker). Upon digestion with Viscozyme, the highest sugar yields were about 6.7 g l⁻¹ glucose and 3 g l⁻¹ xylose in the case of PCS, and 5.8 g kg⁻¹ glucose and 1.8 g l⁻¹ xylose in the case of PSCS (see Tables S2 and S3). The hydrolysates were neutralized to pH 7 with phosphate buffer and used undiluted or diluted to 5%, 3% and 1% substrate as the C source. These suspensions were inoculated with DOT-T1E, DOT-T1E (xylABE), DOT-T1EΔgcd and DOT-T1EΔgcd (xylABE) to reach an initial cell density of 10⁶-10⁷ CFU ml⁻¹. We found that at a load of 10% of the 2G hydrolysates from PCS or PSCS none of the strains grew; however, with lower substrate loads, cell densities in the 2G PCS and PSCS hydrolysates reached between 3 × 10⁸ and 7 × 10⁹ CFU ml⁻¹. The results shown in Table 4 reveal that, with a substrate load of 3% (w/v) PCS or PSCS, all strains were able to consume more than 99% of the glucose and more than 70% of the xylose in 24 h (>96% in 48 h), except for DOT-T1EΔgcd, which, as expected, did not consume xylose.
(Table notes: Growth conditions and analytical determinations are described in detail under Experimental procedures. The data are the average of at least two independent assays. Phenylalanine concentration in the culture medium was determined at the start of the assay (negligible in all cases) and after 24 h of incubation (values given in the table). Standard deviations are below 15% of the given values.)
Elmore et al. (2020) reported that a KT2440 derivative carrying the xylABE genes was able to use xylose but did not assimilate arabinose; this contrasts with the results in S12, which uses xylose and arabinose upon acquisition of the xylAB genes and adaptation to growth on xylose. We have not analysed arabinose consumption by the T1E derivatives in detail, but in our 2G hydrolysate assays we determined the levels of arabinose before and after growth of six recombinant strains and found that they remained unchanged. In addition, we determined whether any of the strains inoculated in the 2G hydrolysates would consume the acetic acid present at concentrations of 200-260 mg l⁻¹. We found that >85% of the initial acetic acid was removed. This set of results suggests that 2G hydrolysates from pretreated lignocellulosic material can be used for the growth of Pseudomonas on 2G materials. Assays similar to those described above with DOT-T1E (xylABE) and its isogenic Δgcd mutant were carried out with the 5PL (xylABE) and 5PLΔgcd (xylABE) strains. We found that these strains grew to reach cell densities of 2 to 3 × 10⁸ CFU ml⁻¹ and accumulated up to 200-250 ppm L-phenylalanine in the hydrolysate supernatants (Table 5).
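The consumption figures quoted in this section (>99% glucose, >70-96% xylose, >85% acetate) are simple ratios of the concentrations determined before and after growth. The sketch below shows that calculation on hypothetical hydrolysate concentrations of the same order as those reported for the 3% (w/v) loads; it is not the authors' analysis.

```python
# Sketch: percentage consumption of hydrolysate components from initial and
# residual concentrations (g/l). All concentrations are hypothetical examples.

def percent_consumed(initial_g_l, final_g_l):
    return (initial_g_l - final_g_l) / initial_g_l * 100.0

if __name__ == "__main__":
    # Hypothetical corn stover hydrolysate before/after 48 h of growth
    components = {"glucose": (6.7, 0.05), "xylose": (3.0, 0.10), "acetate": (0.25, 0.03)}
    for name, (c0, c_end) in components.items():
        print(f"{name}: {percent_consumed(c0, c_end):.1f}% consumed")
```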
2G cell factories and Pseudomonas
The increase in food prices due to the use of cereal grain in the production of so-called first-generation (1G) biofuels (including ethanol) led to the search for new sources of sugars for bioethanol production. The new source of raw material for the so-called 2G technology is biomass, whose abundance allows for a sufficient global feedstock; in the United States alone, biomass could reach more than 450 M dry tons/year, an amount that, in terms of bioethanol equivalence, is 67 Ggal ethanol/year (www.energy.gov/sites). Bioethanol prices are highly volatile and, as such, industry searches for new opportunities in the production of added-value products from 2G hydrolysates. Frank (2010) estimated that the economic potential of biochemicals produced from lignocellulosic residues could reach a value of ~$1 trillion USD, and that more than 60% of the world's most-used chemicals could be synthesized from lignocellulose. In spite of these promising prospects, over the last decade the number of ongoing large-/medium-scale biosynthetic processes that use biomass has remained rather limited (Aristodou and Penttilä, 2000; Chandel et al., 2018). Examples of chemicals produced from 2G hydrolysates at different scales are ethanol, lactic acid, succinic acid, butanol, acetone, sorbitol and itaconic acid (Chandel et al., 2018). We now add to this list the production of phenylalanine, an aromatic amino acid that can be transformed into other chemicals such as trans-cinnamic acid and styrene (McKenna and Nielsen, 2011; Molina-Santiago et al., 2016). A number of Pseudomonads expressing heterologous genes have produced surfactants, organic acids, terpenoids, phenazines and bioplastics from glucose (Rojas et al., 2003; Wittgens et al., 2011; Loeschcke and Thies, 2015; Tiso et al., 2017; Wittgens et al., 2018; Wynands et al., 2018). We envisage that the production of this set of chemicals from 2G sugars will be feasible once one of the xylose catabolic pathways described by Bator et al. (2020) is incorporated into these strains. A true challenge in using 2G technology at the industrial level is not only the conversion of highly recalcitrant and poorly soluble lignocellulose into monomeric sugars but also their efficient use for chemical production. This requires a series of physico-chemical treatments to deconstruct plant structures and make cellulose and hemicellulose available for subsequent enzymatic hydrolysis to yield soluble sugars. Figure 2 shows that dilute-acid steam explosion pretreatment of herbaceous residues resulted in appropriate levels of biomass disorganization of corn stover and sugarcane straw. This specific pretreatment released about 30% to 50% of the xylose in the polymers but less than 3% of the glucose (see Table S1 and Rocha-Martín et al., 2018). Subsequent action of cellulase cocktails can release up to 80% of the total sugars as monomers (Alvarez et al., 2016). In addition to sugars, the 2G hydrolysates contain acetic acid, furfural and lignin monomers. It is known that some of the chemicals generated from biomass during the pretreatment process are growth inhibitors. Industrial production of biochemicals from 2G hydrolysates requires as high a load of 2G hydrolysates as possible to warrant high production of commodities.
The typical 2G bioethanol production processes operate at a solids ratio of 20% (Alcántara et al., 2016); but to generate economic returns, the yields should be close to the theoretical maximum, which in the case of 2G bioethanol production requires that the Saccharomyces strains used in the process consume >96% of the glucose and >90% of the xylose. Co-utilization of different sugars from lignocellulose by industrial microbes is complex and often subject to catabolite repression, which most often prioritizes the use of glucose over other sugars. (Table note: assay conditions are described under Experimental procedures; the data in the table are the average of at least two assays; concentrations of chemicals are in mg l⁻¹; standard deviations were below 15% of the given values.) We have found that this is not the case in our constructs, as glucose and xylose are consumed simultaneously. In fact, P. putida derivatives bearing xylABE genes consumed >96% of the sugars released in the saccharification step. The rates of substrate utilization (0.22–0.3 g l⁻¹ h⁻¹) were superior to those reported for engineered Saccharomyces producing ethanol from 2G substrates (Heer and Sauer, 2008). This, in turn, resulted in the highest concentrations of L-phenylalanine being generated in quite a short time, which in industrial terms will lead to a reduction in the number of fermenters per plant, with a consequent saving in the initial investment, a relevant factor in making the 2G technology economically viable. A current limitation in the use of Pseudomonas as a 2G platform is that P. putida growth in 2G hydrolysates was inhibited when the initial substrate loads were >5% (w/v). Acetate, which is among the potential inhibitors of growth, seems not to be responsible for P. putida growth inhibition, because the strain naturally uses acetic acid as a C source and consumed >85% of the initial acetate in the 2G hydrolysates within 24 h. Other inhibitors present in 2G hydrolysates are furfural and hydroxymethylfurfural, which are inhibitors of yeast growth (Heer and Sauer, 2008). Mutant strains of Saccharomyces tolerant to furfural and able to thrive in lignocellulose hydrolysates have been isolated. However, our preliminary results indicate that T1E tolerates furfural concentrations of up to 20 mM, which is higher than those present in 2G hydrolysates; at present, we cannot discard additive effects of different chemicals as being responsible for substrate toxicity. We are setting up an Adaptive Laboratory Evolution programme to select P. putida strains able to thrive in up to 20% (w/v) 2G hydrolysate load. These mutants are expected to be profitable not only for the synthesis of L-phenylalanine, as determined in this study, but more generally for exploiting P. putida industrially to achieve high product concentrations from high initial substrate loads. In summary, we show that P. putida DOT-T1EΔgcd (xylABE) is able to grow using the C-sources available in 2G hydrolysates. As P. putida can be easily manipulated to express heterologous pathways, it has high potential for further development as an industrial platform.

Experimental procedures

Bacterial strains, plasmids and growth conditions

All bacterial strains and plasmids used in this study are listed in Table 1. Escherichia coli strains were grown in LB medium, while Pseudomonas putida strains were grown in LB medium or in M9 minimal medium, routinely with 5 g l⁻¹ glucose (Abril et al., 1989). Liquid cultures were incubated in a Kühner incubator at 30°C with agitation (200 rpm).
DNA techniques and plasmid and strain constructions

General techniques. DNA was manipulated using standard laboratory protocols (Sambrook and Russell, 2001). Genomic DNA was isolated using the Wizard Genomic DNA Purification Kit (Promega, USA), while plasmid DNA was isolated with the QIAprep Spin Miniprep kit (Qiagen, USA). DNA concentration was measured with a NanoDrop One C (Thermo Scientific, USA). PCR DNA amplification was performed with appropriate primers (see Table S4), dNTPs and Q5 high-fidelity DNA polymerase (New England BioLabs, USA) or Taq DNA polymerase (Roche, Germany), as recommended by the manufacturers.

Electroporation. Electroporation of E. coli DH5α and the different Pseudomonas putida strains with ligation mixtures or complete plasmids was performed as described elsewhere (Aparicio et al., 2015), using a MicroPulser electroporator and Gene Pulser cuvettes with a 0.2 cm gap (Bio-Rad, USA). Transformants were selected on LB agar plates with Km (25 µg ml⁻¹) or Gm (10 µg ml⁻¹) incubated at 30°C for 24 to 36 h.

Construction of mutants deficient in Gcd activity. Inactivation of T1E_2822, encoding glucose dehydrogenase (Gcd), was achieved as follows: a 1009 bp DNA fragment spanning the central part of this ORF was amplified by PCR from P. putida DOT-T1E genomic DNA using primers T1E_2822Fw and T1E_2822Rv (Table S4). The resulting amplified DNA was cloned into pMBL to yield pMBL::T1E_2822. This plasmid was then digested with BamHI and ligated to the BamHI kanamycin Ω-interposon fragment from plasmid pHP45Ω-Km. The resulting chimeric DNA was named pMBL::T1E_2822ΩKm. This plasmid was electroporated into P. putida DOT-T1E and P. putida 5PL, respectively, and putative KmR recombinant mutants were selected on kanamycin LB plates. A number of KmR strains were retained, and Southern blotting was used to verify the insertional mutation in the gcd gene (see Fig. S1). A gcd mutant strain from each of the DOT-T1E and 5PL parental strains, referred to as P. putida DOT-T1EΔgcd and 5PLΔgcd, respectively, was retained for further study (Fig. S1).

Growth parameters

For growth experiments with different carbohydrates, overnight cultures of P. putida were grown in M9 minimal medium with glucose as the sole carbon source, harvested by centrifugation (13 000 g, 5 min) and washed once with M9 minimal medium (Abril et al., 1989). Then, cells were suspended to an OD660 of ~0.1 in 30 ml of the test medium. When required, kanamycin (25 µg ml⁻¹) or gentamicin (10 µg ml⁻¹) was added. Doubling times were determined during exponential growth as the slope of the data points obtained by plotting CFU ml⁻¹ or turbidity at 660 nm against time. For cell dry weight (CDW) determination, samples of cultures grown in M9 minimal medium with 5 g l⁻¹ of carbon source, namely glucose, xylose or a glucose:xylose mixture (3:1, v:v), were transferred into 2 ml pre-weighed Eppendorf tubes and pelleted at 13 000 g for 10 min. The pellets were washed once with 1×M9 buffer and left to dry at 70°C for 48 h. Substrate consumption rates and specific carbon consumption rates (qs) were determined during the initial 24 h of culture as described (Dvořák and de Lorenzo, 2018).

Other analytical techniques

Cell growth was routinely monitored at 660 nm using a PerkinElmer Lambda 20 UV/VIS spectrophotometer (USA). A turbidity of 1.0 is equivalent to 414 mg l⁻¹ of cell dry weight (CDW).
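As a companion to these definitions, the following minimal sketch shows how the growth parameters described above can be computed: the doubling time from the slope of ln(CFU) versus time during exponential growth, CDW from turbidity using the stated equivalence (OD660 of 1.0 = 414 mg l⁻¹ CDW), and average volumetric and specific substrate consumption rates. The example numbers are hypothetical, and the rate calculations are simplified averages rather than the exact procedure of Dvořák and de Lorenzo (2018).

```python
import numpy as np

def doubling_time_h(times_h, cfu_per_ml):
    """Doubling time (h) from the slope of ln(CFU) versus time during exponential growth."""
    slope, _intercept = np.polyfit(np.asarray(times_h, float),
                                   np.log(np.asarray(cfu_per_ml, float)), 1)
    return np.log(2) / slope

def cdw_g_per_l(od660):
    """Cell dry weight estimated from turbidity (1.0 OD660 = 414 mg/l CDW, as stated above)."""
    return od660 * 0.414

def volumetric_rate_g_l_h(c_start_g_l, c_end_g_l, hours):
    """Average volumetric substrate consumption rate (g per litre per hour)."""
    return (c_start_g_l - c_end_g_l) / hours

def qs_g_gcdw_h(consumed_g_l, mean_cdw_g_l, hours):
    """Simplified average specific consumption rate (g substrate per g CDW per hour)."""
    return consumed_g_l / (mean_cdw_g_l * hours)

# Hypothetical example numbers (not measurements from this study):
print(f"td   = {doubling_time_h([0, 2, 4, 6], [1e7, 3e7, 9e7, 2.7e8]):.2f} h")
print(f"CDW  = {cdw_g_per_l(1.2):.2f} g/l")
print(f"rate = {volumetric_rate_g_l_h(5.8, 0.2, 24):.2f} g/l/h")
print(f"qs   = {qs_g_gcdw_h(5.6, 0.6, 24):.2f} g gCDW^-1 h^-1")
```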
For determination of metabolites, cultures (1 ml) were centrifuged (13 000 g, 4°C, 10 min), and 0.5 ml of the supernatants was stored at −20°C until analysed. For analysis of sugars and acetic acid, the D-GLUCOSE-HK, K-XYLOSE, K-ARGA and K-ACETRM assay kits (Megazyme, Ireland) were used according to the manufacturer's instructions. Measurements were performed using a TECAN Sunrise 200 microplate absorbance reader (Tecan GmbH, Austria). L-Phenylalanine levels were determined in culture supernatants using an Agilent/HP 1050 HPLC system (Agilent/HP, USA), equipped with a Nova-Pak C18 column (4 µm, 3.9 mm × 150 mm, Waters) and coupled to a DAD detector. MilliQ H₂O acidified with 0.1% (v/v) H₃PO₄ (A) and acetonitrile:H₂O (90:10, v/v), also supplemented with 0.1% H₃PO₄ (B), were used as eluents. Samples (20 µl) were injected for analysis at a constant flow rate of 0.85 ml min⁻¹ for isocratic separation using a mixture of 75% (v/v) A and 25% (v/v) B. When an elution gradient was required, we used the same eluents with the following ramp of solvents and times: the method started with 2 min at 100% A; the mobile phase was then changed to 30% B within 8 min, followed by a 2 min hold; the composition was then changed to 100% B within 2 min, followed by a 2 min hold; finally, the mobile phase was returned to the initial conditions in a 2 min linear gradient. The column temperature was 20°C. L-Phenylalanine was monitored at 215 nm.

Enzymatic hydrolysis

Pretreated corn stover (PCS) and pretreated sugarcane straw (PSCS) were prepared by steam explosion with diluted sulfuric acid at the Abengoa Bioenergy Biomass Pilot Plant in York (Iowa, USA), following the procedure described by Alcántara et al. (2016). Hydrolysis of PCS or PSCS (10 g l⁻¹) was performed in 100 ml borosilicate conical flasks containing 1×M9 with the pH adjusted to 5.5. The enzymatic cocktail Viscozyme (Sigma-Merck) was added as indicated in Table S3. Glucan content had been determined previously according to the standard biomass analytical procedures of NREL (Rocha-Martín et al., 2018). Samples were incubated in an orbital incubator at 50°C with shaking at 70 rpm for 24 h.

Scanning electron microscopy

Samples were fixed in a 2.5% (v/v) glutaraldehyde solution in 0.1 M cacodylate buffer (pH 7.4) for 24 h at 4°C. Samples were washed in the same buffer (3 times for 15 min, at 4°C). Post-fixation was performed with 1% (w/v) osmium tetroxide in the dark for 1 h at room temperature. Samples were then washed 3 times for 5 min with distilled water. They were then dehydrated in a series of increasing ethanol concentrations from 50% to 100% ethanol and subjected to critical point drying (Anderson, 1951) with carbon dioxide in a Leica EM CPD300. Finally, samples were coated with evaporated carbon in an EMITECH K975X carbon evaporator and examined in a high-resolution scanning electron microscope (HRSEM; Auriga FIB-FESEM, Carl Zeiss).

Acknowledgements

This work was co-funded with FEDER support. We thank Ben Pakuts for critical reading of the manuscript.

Supporting information

Additional supporting information may be found online in the Supporting Information section at the end of the article.

Fig. S1. Southern blot assay to check P. putida DOT-T1E and P. putida 5PL gcd Ω-Km mutant strains. The gcd Ω-Km mutant strains are indicated with the letter 'M'; control strains (P. putida DOT-T1E and P. putida 5PL) are indicated with the letter 'C'.
The probe used for the Southern blot assay was the 1009 bp gcd fragment amplified by PCR with the T1E_2822Fw and T1E_2822Rv primers, as described in Experimental procedures.

Fig. S2. Growth curves of (A) P. putida DOT-T1E and (B) P. putida 5PL and their recombinant strains in minimal medium with 5 g l⁻¹ D-glucose. Experiments were carried out as described in Experimental procedures. Blue lines, wild-type strains; red lines, Δgcd mutant strains; green lines, wild-type strains bearing pSEVA633_xylABE; purple lines, Δgcd mutant strains bearing pSEVA633_xylABE. Curves represent the average of at least two independent assays performed in triplicate. Standard deviations are below 10% of the given values.
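For readers who wish to reproduce or adapt the HPLC gradient programme described under Other analytical techniques above, the sketch below encodes it as (time, %B) breakpoints with linear interpolation between them. It is only a planning convenience under the stated timings, not vendor instrument-control syntax.

```python
# The gradient described in Other analytical techniques, expressed as cumulative
# (minutes, %B) breakpoints: 2 min at 100% A, ramp to 30% B over 8 min, 2 min hold,
# ramp to 100% B over 2 min, 2 min hold, return to initial conditions over 2 min.
GRADIENT = [(0, 0), (2, 0), (10, 30), (12, 30), (14, 100), (16, 100), (18, 0)]

def percent_B(t_min: float) -> float:
    """Percentage of eluent B at time t_min, by linear interpolation between breakpoints."""
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    return GRADIENT[-1][1]

print(percent_B(6))   # 15.0 %B, halfway through the 2-to-10 min ramp
```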
The role of perivascular adipose tissue in obesity‐induced vascular dysfunction Under physiological conditions, perivascular adipose tissue (PVAT) attenuates agonist‐induced vasoconstriction by releasing vasoactive molecules including hydrogen peroxide, angiotensin 1–7, adiponectin, methyl palmitate, hydrogen sulfide, NO and leptin. This anticontractile effect of PVAT is lost under conditions of obesity. The central mechanism underlying this PVAT dysfunction in obesity is likely to be an ‘obesity triad’ (consisting of PVAT hypoxia, inflammation and oxidative stress) that leads to the impairment of PVAT‐derived vasoregulators. The production of hydrogen sulfide, NO and adiponectin by PVAT is reduced in obesity, whereas the vasodilator response to leptin is impaired (vascular leptin resistance). Strikingly, the vasodilator response to acetylcholine is reduced only in PVAT‐containing, but not in PVAT‐free thoracic aorta isolated from diet‐induced obese mice, indicating a unique role for PVAT in obesity‐induced vascular dysfunction. Furthermore, PVAT dysfunction has also been observed in small arteries isolated from the gluteal/visceral fat biopsy samples of obese individuals. Therefore, PVAT may represent a new therapeutic target for vascular complications in obesity. A number of approaches are currently being tested under experimental conditions. Potential therapeutic strategies improving PVAT function include body weight reduction, enhancing PVAT hydrogen sulfide release (e.g. rosiglitazone, atorvastatin and cannabinoid CB1 receptor agonists) and NO production (e.g. arginase inhibitors), inhibition of the renin–angiotensin–aldosterone system, inhibition of inflammation with melatonin or cytokine antagonists, activators of AMP‐activated kinase (e.g. metformin, resveratrol and diosgenin) and adiponectin releasers or expression enhancers. Linked Articles This article is part of a themed section on Molecular Mechanisms Regulating Perivascular Adipose Tissue – Potential Pharmacological Targets? To view the other articles in this section visit http://onlinelibrary.wiley.com/doi/10.1111/bph.v174.20/issuetoc Introduction The central role of the cardiovascular system is the transportation of oxygen, nutrients, biomolecules and signalling molecules to organs, tissues and cells. In addition, the circulation system is important in host defence by the immune system and in blood haemostasis/coagulation (Daiber et al., 2017). A dysregulation of vascular function may result in increased peripheral vascular resistance and blood pressure. Furthermore, vascular dysfunction promotes atherogenesis, and exacerbates insulin resistance by limiting the nutritive flow downstream in the microcirculation (Yudkin et al., 2005;Greenstein et al., 2009;Fuster et al., 2016). For decades, the endothelium has been the focus of vascular research. In recent years, however, the importance of other vascular cells has been increasingly recognized, including vascular smooth muscle cells (VSMCs) (Lacolley et al., 2012), adventitia cells (Stenmark et al., 2013;Wu et al., 2015) and cells in the perivascular adipose tissue (PVAT). Moreover, communications exist between the different vascular cells and between the different layers of the vascular wall (Campbell et al., 2012). Therefore, the vascular wall should be regarded as a whole rather than separate layers. The present article focusses on the role of PVAT in regulating vascular function. 
PVAT: general aspects

PVAT surrounds large arteries and veins, small and resistance vessels, and skeletal muscle microvessels. Other microvasculatures and the cerebral vasculature are free of PVAT (Brown et al., 2014; Gil-Ortega et al., 2015). In contrast to humans and other large experimental animals (including rabbits and pigs), the murine coronary artery is without PVAT (Brown et al., 2014). Large vessels (such as the saphenous vein) are separated from PVAT by an anatomical barrier that is rich in collagen bundles, elastin networks, VSMCs, fibroblasts, autonomic nerve endings and vasa vasorum (Ahmed et al., 2004). In small vessels and microvessels, however, PVAT is an integral part of the vascular wall, without laminar structures or any organized barrier separating PVAT from the adventitia (Szasz and Webb, 2012; Gil-Ortega et al., 2015). It is proposed that factors released by the PVAT reach the medial and endothelial layers of blood vessels either by direct diffusion or via the vasa vasorum (Gil-Ortega et al., 2015). Another possible way for PVAT-derived factors to reach the inner layers of the vascular wall is via the small media conduits, a dense reticular network of collagenous conduits connecting the medial layer with the underlying adventitia (Grabner et al., 2009; Campbell et al., 2012). These conduits enable soluble molecules to traffic between the PVAT/adventitia and media/intima. In a murine model with VSMC-specific PPAR-γ deletion, the animals are completely devoid of PVAT in the aortic and mesenteric regions. In contrast, interscapular brown adipose tissue (BAT) and gonadal/inguinal/subcutaneous white adipose tissue (WAT) in these animals remain intact. These results indicate that the origin of the adipocytes in the PVAT is different from those in WAT and BAT (Chang et al., 2012; Brown et al., 2014). This notion is consistent with a previous observation that PVAT is a functionally specialized type of adipose tissue and that PVAT adipocytes differ inherently in developmental and secretory properties from adipocytes in other fat depots (Chatterjee et al., 2009). Indeed, PVAT adipocytes are likely to arise from VSMC progenitors (Brown et al., 2014; Omar et al., 2014; Gil-Ortega et al., 2015). Importantly, regional phenotypic and functional differences exist among PVAT depots (Brown et al., 2014; Gil-Ortega et al., 2015). Depending on the vascular bed, PVAT can be WAT-like (e.g. murine mesenteric PVAT), BAT-like (e.g. PVAT of the murine thoracic aorta) or mixed adipose tissue (e.g. PVAT of the murine abdominal aorta) and may have different vascularization, innervation and adipokine profiles (Szasz and Webb, 2012; Padilla et al., 2013; Brown et al., 2014; Gil-Ortega et al., 2015; Drosos et al., 2016; Victorio et al., 2016b). The morphological properties of PVAT in other species are less well defined than those of murine PVAT. Human coronary PVAT exhibits a histological appearance and gene expression pattern more consistent with WAT than with BAT (Chatterjee et al., 2009). A role for PVAT in vascular function was first indicated by the observation that PVAT decreased the contractile responses to noradrenaline in rat aorta (Soltis and Cassis, 1991). Now, it is known that PVAT attenuates the vascular responsiveness to several (hormonal) agonists, including phenylephrine, angiotensin II (Ang II), 5-HT and endothelin-1 (ET-1) (Gollasch, 2012; Szasz and Webb, 2012).
PVAT may regulate vascular tone by releasing bioactive molecules, which can be regarded as an endocrine-related effect. In addition, PVAT also exerts a more direct, local effect on the vascular wall via paracrine mechanisms. PVAT is composed mainly of adipocytes and releases a wide range of biologically active molecules that modulate vascular function (Szasz and Webb, 2012). Factors that mediate the anticontractile effects of PVAT are referred to as adipocyte-derived relaxing factors (ADRF) (Lohn et al., 2002; Gollasch, 2012) or perivascular-derived relaxing factors (PVRF) (Lee et al., 2011). The mechanism of action of some ADRF has been shown to rely on the opening of smooth muscle K⁺ channels, especially the KCNQ-type voltage-gated (Kv) K⁺ channels (Lohn et al., 2002; Gollasch, 2012), whereas ATP-dependent K⁺ (KATP) channels have only minor effects (Schleifenbaum et al., 2010; Gil-Ortega et al., 2015). Indeed, pharmacological openers of KCNQ channels mimic the effects of ADRF in spontaneously hypertensive rats as well as in a mouse model of metabolic syndrome (the New Zealand obese mouse) (Zavaritskaya et al., 2013; Tano et al., 2014). The identity of ADRF/PVRF is a matter of ongoing debate, and it is likely to be a combination of several different molecules, depending on the stimulus applied, the vascular bed examined and the phenotypic state of the PVAT (Figure 1).

H₂O₂

Bioassay experiments indicate that PVAT can exert its anticontractile effects through two distinct mechanisms: by releasing transferable relaxing factors and by an endothelium-independent mechanism involving H₂O₂ and subsequent activation of soluble guanylyl cyclase (sGC) (Gao et al., 2007). PVAT-derived H₂O₂ has been shown to be one of the factors mediating the endothelium-independent anticontractile property of PVAT. Because H₂O₂ is membrane-permeable, PVAT-derived H₂O₂ can easily diffuse to the underlying smooth muscle cells and induce vasodilatation by acting as a non-NO sGC activator (Gao et al., 2007; Fernandez-Alfonso et al., 2013). Moreover, H₂O₂ can directly activate cGMP-dependent PKGIα independently of cGMP (Burgoyne et al., 2007).

Adiponectin

It is controversial whether adiponectin is an ADRF/PVRF. Aortae and mesenteric arteries from adiponectin-knockout mice have been shown to maintain their anticontractile properties (Fesus et al., 2007). However, recent studies demonstrate that the anticontractile activity of PVAT is significantly reduced in adiponectin-deficient mice (Lynch et al., 2013; Withers et al., 2014b), and the anticontractile function of PVAT can be largely abolished by blocking adiponectin receptors (Greenstein et al., 2009; Lynch et al., 2013). Undisputedly, adiponectin is produced in adipose tissues (including PVAT) and is a potent vasodilator. Circulating adiponectin has been shown to be directly related to endothelial function in the general population (Saarikoski et al., 2010), although some controversy exists (Ran et al., 2010). Adiponectin is an independent predictor of endothelial function in a well-phenotyped cohort of patients with coronary artery disease (Margaritis et al., 2013), and genome-wide association studies have revealed that genetically determined lower circulating adiponectin levels are related to worse endothelial function (Margaritis et al., 2013). Adiponectin induces vasodilatation through multiple mechanisms.
(i) Adiponectin can act directly on VSMCs, causing myocyte hyperpolarization by activating TRPM4 channels followed by the opening of large-conductance Ca²⁺-activated K⁺ channels (BKCa) (Lynch et al., 2013; Weston et al., 2013). (ii) Adiponectin stimulates NO release from adjacent adipocytes through paracrine mechanisms. This adipocyte-derived NO potentiates BKCa opening in VSMCs. (iii) Adiponectin can enhance endothelial NO production by stimulating the binding of HSP90 to endothelial NOS (eNOS) (Xi et al., 2005), by enhancing eNOS phosphorylation at serine 1177, which is catalysed by either PI3K/Akt (Xi et al., 2005; Cerqueira et al., 2012; Margaritis et al., 2013) or AMP-activated kinase (AMPK) (Withers et al., 2014a), and by increasing the biosynthesis of tetrahydrobiopterin, an essential cofactor of eNOS (Margaritis et al., 2013). Consistently, adiponectin-knockout mice exhibit reduced eNOS phosphorylation at serine 1177 and impaired endothelial function (Cao et al., 2009). Moreover, adiponectin produced in PVAT has also been shown to exert a paracrine effect on the underlying arterial wall by suppressing NADPH oxidase activity via a PI3K/Akt-mediated inhibition of Rac1 and down-regulation of p22phox gene expression (Antonopoulos et al., 2015b). Importantly, adiponectin plays a central role in mediating crosstalk between different vascular cells (Figure 1). Through paracrine mechanisms, adiponectin from PVAT adipocytes regulates NO production in adjacent adipocytes (Withers et al., 2014a,b). As mentioned above, PVAT adiponectin induces vasodilatation by stimulating endothelial cells and VSMC hyperpolarization, and reduces vascular oxidative stress by inhibiting NADPH oxidase activity. As a feedback mechanism, oxidative stress in the vascular wall up-regulates PVAT adiponectin expression by activating PPARγ (Antonopoulos et al., 2015b).

Figure 1. PVAT-derived vasoactive molecules. Methyl palmitate produced by PVAT adipocytes (AC) causes vasodilatation by opening the Kv channels on VSMCs. H₂S is synthesized in PVAT by CSE and induces VSMC hyperpolarization by stimulating KCNQ-type Kv or KATP channels. Leptin induces endothelium-dependent vasodilatation by stimulating the leptin receptor (LepR), which leads to activation of eNOS via a pathway involving AMPK and Akt and to H₂S production. This H₂S functions as an EDHF and activates endothelial small (SKCa) and intermediate (IKCa) conductance calcium-dependent K⁺ channels via autocrine mechanisms. The resulting hyperpolarization of endothelial cells can be transmitted to VSMCs by electrical coupling through myoendothelial gap junctions (MEGJ). Leptin also causes endothelium-independent vasodilatation by inducing VSMC hyperpolarization through unknown mechanisms. NO and H₂O₂ released from PVAT can elicit vasodilatation by activating sGC, leading to the synthesis of cGMP. Adiponectin release from PVAT AC can be enhanced by stimulation of β₃-adrenoceptors (β3) and by the NO-cGMP-PKG pathway. Adiponectin exerts multiple vascular effects: it stimulates NO production from PVAT and from endothelial cells and induces VSMC hyperpolarization by activating TRPM4 channels followed by opening of BKCa. Ang 1-7 produced by PVAT acts on the endothelial Ang 1-7 receptor (Mas; MAS1 receptor), thereby stimulating endothelial NO production. Besides stimulating sGC activity, NO from PVAT and endothelial cells can also induce/potentiate VSMC hyperpolarization through KCa or BKCa. Partly adapted from Beltowski (2013), Weston et al. (2013) and Withers et al. (2014a).
Methyl palmitate

A study using the superfusion bioassay cascade technique revealed that PVAT releases a lipophilic, heat-stable factor that causes vasodilatation by opening the Kv channels on VSMCs (Lee et al., 2011). This factor has been identified as palmitic acid methyl ester (PAME). PAME derives from PVAT adipocytes and is released from PVAT spontaneously in Ca²⁺-containing solutions. PAME release is inhibited under Ca²⁺-free conditions and enhanced by the calcium ionophore A23187 (Lee et al., 2011). The anticontractile function of PVAT, as well as the release of PAME, was found to be reduced in spontaneously hypertensive rats, with both mechanisms contributing to the pathogenesis of hypertension (Lee et al., 2011).

H₂S

H₂S is synthesized from L-cysteine by cystathionine β-synthase (CBS) or cystathionine γ-lyase (CSE) and is enzymatically metabolized in mitochondria by sulfide:quinone oxidoreductase. In endothelial cells, H₂S is produced by CSE and is considered the most likely candidate for the endothelium-derived hyperpolarizing factor (EDHF) (Yang et al., 2008; Wang et al., 2015). In an autocrine mode, endothelial H₂S activates small and intermediate conductance calcium-dependent K⁺ channels on endothelial cells (Mustafa et al., 2011). The resulting hyperpolarization of endothelial cells can be transmitted to VSMCs by electrical coupling through myoendothelial gap junctions or by increased K⁺ efflux, which then activates VSMC inwardly rectifying K⁺ channels and/or Na⁺/K⁺-ATPase (Feletou and Vanhoutte, 2009; Jamroz-Wisniewska et al., 2014). In a paracrine mode, endothelium-generated H₂S can directly induce VSMC hyperpolarization by opening KATP channels in these cells (Wang et al., 2015). Recent studies indicate that PVAT also produces H₂S via CSE activity and that PVAT-derived H₂S is involved in the anticontractile effect of PVAT on adjacent vessels (Fang et al., 2009; Schleifenbaum et al., 2010; Kohn et al., 2012). H₂S donors produce a strong vasorelaxation of VSMCs by opening the KCNQ-type Kv channels (Kohn et al., 2012), perhaps also partly by opening KATP channels (Jamroz-Wisniewska et al., 2014). The anticontractile effect of PVAT can be reduced by inhibition of CSE (but not CBS) in aortas from rats but not from mice, indicating that endogenous H₂S is an ADRF from rat (but not mouse) aortic PVAT (Kohn et al., 2012).

NO

It has been demonstrated that the anticontractile effects of PVAT cannot be blocked by inhibiting NOS (Lohn et al., 2002). It was, therefore, concluded that the PVAT-derived relaxing factor was not NO. Now, it is clear that ADRF represents more than one single molecule, and blockade of NO synthesis alone may not be sufficient to prevent the effect of PVAT. Recent studies demonstrate that NO is produced within PVAT and that this PVAT-derived NO contributes to the anticontractile effect of PVAT (Dashwood et al., 2007; Gil-Ortega et al., 2010; Withers et al., 2014b; Aghamohammadzadeh et al., 2015; Bussey et al., 2016; Xia et al., 2016; Victorio et al., 2016b). Immunohistochemistry analysis has shown eNOS staining in adipocytes as well as in endothelial cells of the capillaries and vasa vasorum within PVAT (Dashwood et al., 2007). The NO produced in PVAT can be directly visualized in situ with fluorescence imaging (Gil-Ortega et al., 2010; Xia et al., 2016; Victorio et al., 2016b).
Adipocyte-derived NO is supposed to be released into the interstitial fluid and to diffuse into the capillaries and adjacent arterioles, causing vasodilatation (Mastronardi et al., 2002). In myograph experiments, NG-nitro-L-arginine methyl ester (L-NAME) induces vasoconstriction, which reflects the amount of basal NO released. The L-NAME-induced vasoconstriction in small arteries isolated from visceral fat of healthy individuals is reduced by the removal of PVAT (Virdis et al., 2015), indicating that PVAT contributes to vascular NO production. In PVAT-intact, endothelium-denuded rat mesenteric arteries, eNOS inhibitors significantly enhance noradrenaline-induced contractions, indicating that PVAT-derived NO contributes to the anticontractile effect of PVAT independently of the endothelium (Aghamohammadzadeh et al., 2015; Bussey et al., 2016). Multiple mechanisms may mediate the vasorelaxant action of PVAT-derived NO (Figure 1): (i) PVAT NO may diffuse into adjacent smooth muscle cells and induce vasodilatation by stimulating cGMP synthesis; (ii) NO derived from eNOS in adipocytes positively regulates adiponectin release (Withers et al., 2014b); and (iii) adipocyte-derived NO modulates BKCa in smooth muscle cells and potentiates hyperpolarization.

Leptin

Leptin is mainly produced by adipocytes and plays important roles in regulating appetite, energy expenditure, inflammation and immune responses (Molica et al., 2015). At the clinical level, its role in coronary heart disease remains controversial (Antonopoulos et al., 2015a). Leptin is also produced by PVAT (Dashwood et al., 2011; Galvez-Prieto et al., 2012), although it is not considered an ADRF. The anticontractile effect of PVAT is maintained in Zucker fa/fa rats, which lack functional leptin receptors (Lohn et al., 2002). Under physiological conditions, leptin has no acute effect on blood pressure because it activates both pressor (sympathetic nervous system) and depressor (vasodilatation and natriuresis) mechanisms in a balanced manner (Jamroz-Wisniewska et al., 2014). Through central mechanisms, leptin increases sympathetic nervous activity to the kidney, the adrenal gland, the hindlimbs and BAT (Haynes et al., 1997; Mark, 2013). However, leptin also directly induces relaxation of blood vessels. Acute infusion of leptin has no significant effect on blood pressure in rats (Haynes et al., 1997) or in humans (Brook et al., 2007). Interestingly, leptin's dual effect on blood pressure can be revealed by various procedures (Fruhbeck, 1999). When NO synthesis is inhibited, leptin administration to Wistar rats produces a statistically significant increase in blood pressure, whereas a hypotensive response is induced by leptin in the presence of ganglionic blockade (Fruhbeck, 1999). Leptin may cause vasodilatation in different vessels through different mechanisms. In large arteries (e.g. the aorta), leptin induces an endothelium-dependent vasodilatation that results from the release of NO (Lembo et al., 2000). Leptin-stimulated endothelial NO production is mediated by the sequential activation of an AMPKα1-Akt-eNOS pathway leading to eNOS phosphorylation at serine 1177 (Vecchione et al., 2002; Procopio et al., 2009). In small arteries (e.g. mesenteric arteries), both NO and EDHF play a role in leptin-induced relaxation (Lembo et al., 2000; Jamroz-Wisniewska et al., 2014).
The leptin-induced vasorelaxation mediated by EDHF is, at least in part, a result of CSE-dependent H₂S production in endothelial cells (Jamroz-Wisniewska et al., 2014). In human saphenous vein and internal mammary artery (Momin et al., 2006), and to a small extent in rat mesenteric artery (Jamroz-Wisniewska et al., 2014), leptin induces an endothelium-independent vasodilatation that is mediated by hyperpolarization of the VSMCs. In coronary artery bypass surgery, saphenous vein grafts with PVAT exhibit a superior patency rate and better preserved intimal, medial and adventitial architecture compared with those without PVAT (Verma et al., 2014; Kopjar and Dashwood, 2016), demonstrating the beneficial effects of PVAT in vivo. Leptin concentrations in the PVAT of saphenous vein grafts are within the concentration range causing relaxation of bypass conduits (Dashwood et al., 2011). It has therefore been proposed that PVAT-derived leptin may play a role in preserving a 'healthy graft' by reducing vasospasm at graft harvesting as well as after its implantation into the coronary circulation (Dashwood et al., 2011; Dashwood and Tsui, 2013). However, there is no evidence for a causal link between PVAT-derived leptin and the superior performance of PVAT-containing grafts. The beneficial effects of PVAT in this regard could also be attributed to other vasoactive molecules released by PVAT.

PVAT dysfunction in obesity

Obesity has numerous adverse effects on the circulation and on cardiovascular structure and function (Lavie et al., 2009). Obese patients are more likely to develop hypertension and cardiomyopathy, and have a higher risk of stroke (Lavie et al., 2009). Obesity and atherosclerosis have long been linked in observational studies. The two conditions share similar pathophysiological pathways, including dyslipidaemia and chronic inflammatory processes (Rocha and Libby, 2009). In humans, the thoracic peri-aortic fat mass correlates with hypertension, diabetes and aortic/coronary calcification when corrected for body mass index (but not when corrected for visceral adipose tissue) (Lehman et al., 2010). Clinical studies have shown that abdominal adiposity is associated with vascular dysfunction, as measured by flow-mediated dilation of the brachial artery (Brook et al., 2001; Hamburg et al., 2008). In human small arteries, the anticontractile effect of PVAT is completely lost in obese patients with metabolic syndrome (Greenstein et al., 2009). In animal models of obesity, the PVAT mass and adipocyte size are increased (Marchesi et al., 2009; Ketonen et al., 2010), which is accompanied by other structural modifications in the PVAT (Szasz and Webb, 2012). The anticontractile effect of PVAT is completely lost in a mouse model of diet-induced obesity (Ketonen et al., 2010) and in a genetic model of metabolic syndrome (the New Zealand obese mouse) (Marchesi et al., 2009), or significantly reduced in ob/ob mice (Agabiti-Rosei et al., 2014). In addition, the PVAT of obese animals inhibits the vasodilator responses to acetylcholine (Ketonen et al., 2010; Ma et al., 2010; Xia et al., 2016). Obesity-induced dysfunction of the PVAT correlates with a rise in blood pressure in rodent models of diet-induced obesity (Aghamohammadzadeh et al., 2015), demonstrating the importance of PVAT function in vascular pathology in vivo.
Based on current research results, an 'obesity triad' consisting of PVAT hypoxia, inflammation and oxidative stress can be proposed as the central mechanism in obesity-induced PVAT dysfunction (Figure 2). Cellular hypoxia in PVAT is thought to be due to adipocyte hypertrophy twinned with reductions in capillary density and angiogenesis (Pasarica et al., 2009; Fuster et al., 2016). Hypoxia, in turn, stimulates the production of inflammatory cytokines and chemokines from PVAT adipocytes and infiltrating macrophages (Greenstein et al., 2009). Importantly, PVAT inflammation precedes macrophage infiltration. A short-term high-fat diet (HFD) fed to mice for 2 weeks results in a marked up-regulation of pro-inflammatory leptin and MIP1α (also known as CCL3) and down-regulation of anti-inflammatory PPARγ (Chatterjee et al., 2009; Cheang et al., 2015) and adiponectin (Chatterjee et al., 2009). At this early stage, there are no signs of macrophage infiltration in PVAT, although infiltration of T cells into PVAT is likely (Chatterjee et al., 2009). Thus, the infiltration of macrophages may be a response to PVAT inflammation, probably triggered by chemokines such as MCP-1. PVAT adipocytes produce 50-fold more MCP-1 than adipocytes from other regional fat depots (Chatterjee et al., 2009), and a HFD further enhances MCP-1 expression in aortic PVAT (Ketonen et al., 2010; Xia et al., 2016). Macrophages, in turn, potentiate the PVAT inflammatory response and enhance the activity of NADPH oxidase, a major source of superoxide anion in the vasculature that is also expressed in PVAT (Gao et al., 2006). Indeed, the expression of p67phox (Ketonen et al., 2010) and Nox2 (Xia et al., 2016), subunits of the NADPH oxidase complex, is increased in the PVAT of obese mice. Moreover, HFD-fed mice also show reduced SOD3 expression and glutathione levels in their mesenteric PVAT (Gil-Ortega et al., 2014). The resulting oxidative stress further enhances PVAT inflammation, leading to a vicious circle (Figure 2). In support of this concept, PVAT dysfunction induced by in vitro incubation with aldosterone (Withers et al., 2011) or cytokines (Greenstein et al., 2009) can be fully restored using a combination of catalase and superoxide dismutase. Ex vivo incubation with superoxide dismutase and catalase also restores the anticontractile function of PVAT from obese individuals (Aghamohammadzadeh et al., 2013, 2015). Similarly, hypoxia-induced PVAT dysfunction can be normalized by in vitro incubation with an anti-TNF-α antibody or an anti-IL-6 antibody (Greenstein et al., 2009), confirming the role of inflammation and oxidative stress in PVAT dysfunction. In agreement with this, ex vivo incubation with the anti-TNF-α antibody infliximab improves NO production in small arteries isolated from obese patients. Moreover, this effect is more pronounced in PVAT-containing vessels than in PVAT-free arteries (Virdis et al., 2015). The removal of macrophages prevents hypoxia- and aldosterone-induced PVAT dysfunction (Withers et al., 2011). Also, preventing macrophage infiltration ameliorates PVAT dysfunction in mice with diet-induced obesity (Wang et al., 2012). Local inflammation in PVAT is potentiated by systemic inflammation and by inflammation of other adipose tissues in the obese state, where adipose tissue mass can range from 30 to 50% of total body mass (Fuster et al., 2016). Adipose tissue is the major source of IL-6, contributing as much as one third of total circulating IL-6 (Mohamed-Ali et al., 1997).
Similarly, leptin is an adipose tissue-specific adipokine. It is highly expressed in adipocytes, and circulating leptin levels increase in parallel with adipose tissue mass (Fuster et al., 2016). Thus, IL-6, leptin and other pro-inflammatory factors released from other adipose tissues can reach PVAT via the circulation and contribute to PVAT inflammation. The obesity triad of hypoxia, inflammation and oxidative stress in PVAT ultimately leads to a dysregulation of PVAT-derived vasoregulators.

Figure 2. Mechanisms of PVAT dysfunction in diet-induced obesity. HFD-induced adipocyte hypertrophy leads to hypoxia and the production of pro-inflammatory cytokines and chemokines, activation of NADPH oxidase and down-regulation of antioxidant enzymes (e.g. superoxide dismutase and peroxiredoxin-1) and non-enzymatic antioxidants (e.g. glutathione). Infiltrating immune cells potentiate PVAT inflammation and oxidative stress. Chronic hyperleptinaemia leads to vascular leptin resistance (loss of leptin-induced vasodilatation) and potentiation of PVAT inflammation. Long-term obesity decreases PVAT H₂S production by down-regulating CSE expression. The up-regulation of arginases leads to L-arginine deficiency and eNOS uncoupling (enhanced superoxide production and reduced NO production by eNOS). PVAT adiponectin expression is reduced in obesity, very likely due to a down-regulation of PPARγ. Normally, NO stimulates adiponectin secretion and adiponectin increases PVAT NO production; this positive feedback mechanism is impaired in obesity.

Dysregulation of PVAT-derived vasoregulators in obesity

Under conditions of obesity, the synthesis, secretion or action of PVAT-derived vasoactive molecules is impaired (Figure 2).

H₂S production by PVAT in obesity

Experimental obesity induced by a high-calorie (but not high-fat) 'cafeteria' diet is associated with a time-dependent effect on PVAT-produced H₂S in rats. Feeding for 3 months results in obesity with signs of metabolic syndrome, whereas 1 month of feeding leads to adiposity without insulin resistance. Interestingly, short-term obesity (induced by 1 month of cafeteria diet) in rats increases the production of H₂S by PVAT, and this is associated with an enhanced anticontractile effect of PVAT, whereas long-term obesity (3 months on cafeteria diet) reduces H₂S production and the anticontractile effects of PVAT (Beltowski, 2013). The enhanced H₂S production in the early phase of obesity may represent a compensatory mechanism and result from reduced H₂S oxidation due to hypoxia. In contrast, H₂S production is decreased in long-lasting obesity and metabolic syndrome because of a down-regulation of the CSE enzyme in PVAT (Beltowski, 2013). Plasma H₂S levels are reduced in overweight people and in patients with type 2 diabetes, independently of diabetes. Waist circumference has been shown to be an independent predictor of plasma H₂S and remains so after adjustment for systolic blood pressure, microvascular function, insulin sensitivity, glycaemic control and lipid profile (Whiteman et al., 2010).

Adiponectin expression in PVAT and its secretion in obesity

In humans, circulating adiponectin levels decrease with increasing obesity (Weiss et al., 2004) and type 2 diabetes (Antonopoulos et al., 2015b). Hypoadiponectinaemia is closely associated with endothelial dysfunction (Shimabukuro et al., 2003). A recent study has identified PPARγ as a positive regulator of adiponectin expression in PVAT (Antonopoulos et al., 2015b).
Interestingly, PPARγ is down-regulated in the PVAT of diet-induced obese mice (Chatterjee et al., 2009); this may represent a molecular mechanism for the reduced adiponectin expression in PVAT found during obesity. Furthermore, adiponectin secretion is also reduced during obesity. Normally, adiponectin secretion from PVAT adipocytes is fine-tuned; the mechanisms include β₃-adrenoceptor stimulation, NO production and PKG activation (Withers et al., 2014b). Under conditions of obesity, adiponectin secretion is decreased, which may be attributable to β₃-adrenoceptor desensitization, reduced NO production and PKG down-regulation in PVAT adipocytes (Withers et al., 2014a).

PVAT NO production in obesity

Although an adaptive overproduction of NO from mesenteric PVAT has been observed in the early phase of diet-induced obesity in C57BL/6J mice (Gil-Ortega et al., 2010), a longer time on a HFD leads to a reduction in PVAT NO production (Aghamohammadzadeh et al., 2015; Xia et al., 2016). Inhibition of NO synthesis enhances the agonist-induced contraction of PVAT-intact, endothelium-denuded mesenteric arteries from healthy rats. However, this effect of NO synthesis inhibition is lost in PVAT-intact vessel segments from diet-induced obese animals (Aghamohammadzadeh et al., 2015; Bussey et al., 2016). These results indicate that the anticontractile effect of healthy PVAT is partially mediated by PVAT-derived NO, whereas obesity reduces PVAT NO production to a level that is too low to be functionally relevant. Consistent with these data, basal NO release has been shown to be reduced in small arteries from obese patients compared with non-obese controls. Interestingly, this obesity-induced reduction in basal NO production is only evident in PVAT-containing but not in PVAT-removed arteries (Virdis et al., 2015), supporting the concept that a reduced production of NO by PVAT is involved in the obesity-induced loss of the anticontractile effect of PVAT. Several molecular mechanisms may be responsible for the reduced PVAT NO production in obesity: (i) a reduced expression of eNOS has been reported in the mesenteric PVAT of diet-induced obese rats (Bussey et al., 2016) and mice (Gil-Ortega et al., 2014); (ii) in the PVAT of thoracic aorta from diet-induced obese mice, however, we did not observe any changes in eNOS expression but obtained evidence that the function of eNOS is impaired (eNOS uncoupling as well as reduced eNOS phosphorylation at the serine 1177 residue) (Xia et al., 2016); and (iii) the reduction in PVAT NO can also be partially attributed to a deficiency in PVAT adiponectin, which normally stimulates eNOS activity in PVAT adipocytes. In addition to PVAT NO, NO production from the endothelium is also reduced in experimental obesity (Korda et al., 2008; Marchesi et al., 2009; Yu et al., 2014).

Vascular leptin resistance in obesity

Hyperleptinaemia in diet-induced obesity leads to selective leptin resistance, with preservation of leptin-induced increases in renal sympathetic nerve activity and blood pressure despite resistance to the anorexic and weight-reducing actions of leptin (Rahmouni et al., 2005; Coppari and Bjorbaek, 2012; Mark, 2013). Cerebroventricular administration of leptin antagonists significantly decreases mean arterial pressure and renal sympathetic nerve activity in diet-induced obese rabbits, indicating that the central actions of leptin play a role in the elevation of blood pressure and renal sympathetic nerve activity induced by a HFD in rabbits (Lim et al., 2013).
Similarly, the elevation in blood pressure induced by chronic leptin infusion can be prevented by combined α- and β-adrenoceptor blockade (Shek et al., 1998; Carlyle et al., 2002). Hyperleptinaemia has a biphasic effect on vascular function. In the early stage of diet-induced obesity (4 weeks), the NO-mediated component of leptin-induced relaxation of the mesenteric artery is impaired, which is compensated for by an up-regulation of EDHF-mediated vasodilatation (Beltowski et al., 2010; Jamroz-Wisniewska et al., 2014). Interestingly, the impaired NO component of leptin-induced vasodilatation in obese rats can be restored by antagonizing the leptin receptors, indicating that hyperleptinaemia induces vascular leptin resistance in diet-induced obesity. Consistently, the impairment of leptin-induced, NO-mediated vasodilatation can also be observed in rats treated for 8 days with exogenous leptin (Jamroz-Wisniewska et al., 2014). In the later phase of obesity (3 months), both the NO- and EDHF-mediated effects of leptin are reduced, leading to an increase in blood pressure (Beltowski et al., 2009; Jamroz-Wisniewska et al., 2014). Chronic hyperleptinaemia-induced vascular leptin resistance (loss of leptin-induced vasodilatation) has been shown both in vivo in anaesthetised dogs and ex vivo in coronary rings isolated from diet-induced obese dogs (Knudson et al., 2005). Mechanistically, leptin-induced endothelial dysfunction is associated with inflammation, oxidative stress and eNOS uncoupling (Korda et al., 2008). Leptin levels are uniformly increased in diet-induced obesity, not only in the blood but also in PVAT (Ketonen et al., 2010; Schroeter et al., 2013; Gil-Ortega et al., 2014; Xia et al., 2016). In a mouse model of diet-induced obesity, the up-regulation of leptin in aortic PVAT is paralleled by a reduced anticontractile effect of PVAT and correlates with a loss of PVAT-derived NO and PVAT eNOS (Gil-Ortega et al., 2014). Metabolic syndrome in Ossabaw miniature swine increases epicardial PVAT leptin protein and coronary leptin receptor expression. Coronary arteries from swine with metabolic syndrome display significant endothelial dysfunction that is markedly exacerbated by PVAT. A leptin antagonist reverses the metabolic syndrome effect of PVAT on endothelium-dependent vasodilatation, indicating that epicardial PVAT-derived leptin exacerbates coronary endothelial dysfunction in metabolic syndrome (Payne et al., 2010).

The role of PVAT in obesity-induced vascular dysfunction

In a recent study, we analysed the vascular function of thoracic aorta isolated from diet-induced obese C57BL/6J mice (Xia et al., 2016). Unexpectedly, the endothelium-dependent, NO-mediated vasodilator response to acetylcholine remained unchanged in aortas from mice fed a HFD for 20 weeks (Figure 3A). Even after varying the experimental conditions (60 or 45 kcal% fat; 20 or 12 weeks on a HFD), we did not observe any endothelial dysfunction in the PVAT-free thoracic aorta of obese mice. Strikingly, when the aortic PVAT was left in place, a clear reduction in the vasodilator response to acetylcholine was observed in the aorta of obese animals compared with lean controls (Figure 3B). Acetylcholine-induced vasodilatation in the mouse aorta (either with or without PVAT) can be completely blocked by inhibition of NO synthesis (Figure 3C), indicating that this response is NO-dependent.
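The acetylcholine responses discussed above are concentration-relaxation curves, and a common way to compare PVAT-containing and PVAT-free vessels is to fit a sigmoidal (four-parameter logistic, or Hill) model and compare the fitted Emax and pEC50 values. The sketch below illustrates such a fit on made-up data; it is a generic, hypothetical analysis example, not the statistical method used by Xia et al. (2016).

```python
# Generic four-parameter logistic (Hill) fit of an acetylcholine concentration-relaxation
# curve; the data points are illustrative, not values from the cited study.
import numpy as np
from scipy.optimize import curve_fit

def hill(log_conc, bottom, top, logec50, hillslope):
    """Relaxation (%) as a sigmoidal function of log10 agonist concentration."""
    return bottom + (top - bottom) / (1 + 10 ** ((logec50 - log_conc) * hillslope))

log_ach = np.array([-9, -8, -7, -6, -5])   # log10 [ACh] (M), illustrative
relax = np.array([5, 20, 55, 85, 95])      # % relaxation, illustrative

params, _cov = curve_fit(hill, log_ach, relax, p0=[0, 100, -7, 1])
bottom, top, logec50, slope = params
print(f"Emax ~ {top:.0f}% relaxation, pEC50 ~ {-logec50:.2f}")
```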
Thus, the reduced vasomotor function in the aorta of HFD-fed mice (Figure 3B) results from eNOS dysfunction in the PVAT, but not in the endothelium. Indeed, we found evidence for PVAT eNOS dysfunction in diet-induced obese mice (Xia et al., 2016). Notably, all the pathological changes (arginase induction, L-arginine deficiency, Akt inhibition and reduced eNOS phosphorylation) in diet-induced obese mice were observed only in the PVAT and not in the aorta itself (Xia et al., 2016). These data provide a compelling mechanistic explanation for our initial observation that vascular dysfunction in obese mice is only evident in PVAT-containing aorta but not in PVAT-free aorta. These findings further substantiate our hypothesis that the role of PVAT may be even more important than that of the endothelium in obesity-induced vascular dysfunction under certain experimental settings. Similar evidence also exists in studies with human samples. In small arteries isolated from obese patients, an ET-1/NO imbalance is evident, which is associated with vascular dysfunction. Interestingly, the reduced NO availability and enhanced ET-1 signalling in arteries from obese patients can be reversed by PVAT removal (Virdis et al., 2015), demonstrating the crucial role of PVAT in obesity-induced vascular dysfunction.

Human studies

The role of PVAT in regulating vascular function in humans has been studied by using either small arteries from gluteal/visceral fat biopsy samples or coronary bypass graft materials such as the internal mammary artery or the saphenous vein. Currently, PVAT-removed graft materials are mostly used in coronary artery bypass operations. Intimal hyperplasia and vasospasm with a reduction in patency rate are common problems that occur after bypass surgery (Ozen et al., 2015). In this regard, saphenous vein grafts harvested by the 'no-touch' technique, with intact adventitia and PVAT, have been shown to be superior to those obtained by open vein harvesting or endoscopic vein harvesting with the adventitia and PVAT removed (Souza et al., 2001; Dashwood and Tsui, 2013; Dreifaldt et al., 2013; Verma et al., 2014; Kopjar and Dashwood, 2016). Unfortunately, there are as yet no such clinical studies addressing the role of internal mammary artery PVAT in coronary artery bypass operations. Obesity in humans is associated with endothelial dysfunction in vivo (Steinberg et al., 1996; Al Suwaidi et al., 2001; Grassi et al., 2010). In human small arteries taken from subcutaneous gluteal fat biopsies, the anticontractile effect of PVAT has been shown to be completely lost in obese patients (Greenstein et al., 2009; Aghamohammadzadeh et al., 2013). In obese patients, PVAT undergoes hypoxia, oxidative stress and inflammation, leading to the up-regulation of pro-inflammatory cytokines (e.g. TNF-α) (Greenstein et al., 2009), enhanced ROS production, a down-regulation of antioxidant enzymes (e.g. SOD1, peroxiredoxin-1) (Aghamohammadzadeh et al., 2015) and reduced release of vasodilating adipokines (e.g. adiponectin) (Greenstein et al., 2009), as well as decreased NO production from PVAT (Aghamohammadzadeh et al., 2015). Ex vivo incubation of arteries from obese individuals with superoxide dismutase and catalase restores the anticontractile effects of PVAT (Aghamohammadzadeh et al., 2013). In small arteries isolated from visceral fat of obese patients, the expression of TNF-α is increased both in the vascular wall and in PVAT (Virdis et al., 2015).
This increase in TNF-α reduces the production of adiponectin, triggers eNOS uncoupling by activating NADPH oxidase and stimulates ET-1 generation. The resulting ET-1/NO imbalance is implicated in obesity-induced vascular dysfunction (Virdis et al., 2015). Moreover, the reduced NO availability and enhanced ET-1 signalling in arteries from obese patients can be reversed by PVAT removal, indicating that the vasorelaxant effect of PVAT under physiological conditions is transformed into an inflammatory pro-contractile phenotype in obesity (Virdis et al., 2015).

Weight loss

Calorie restriction-induced sustained weight loss in obese rats leads to an improvement in PVAT function associated with a reversal of obesity-induced hypertension, restoration of adipocyte size, PVAT eNOS function and PVAT TNF-α expression, and normalization of plasma adipokine levels, including leptin and insulin (Bussey et al., 2016). In humans, bariatric surgery restores the anticontractile activity of PVAT in severely obese individuals (Aghamohammadzadeh et al., 2013). The normalization of PVAT function after surgery is accompanied by improvements in insulin sensitivity, serum glycaemic indexes, inflammatory cytokines, adipokine profile and systolic blood pressure, together with increased PVAT adiponectin and NO bioavailability (Aghamohammadzadeh et al., 2013). Strikingly, these changes are evident despite the patients remaining obese (BMI declines from 51 before surgery to 38 after surgery) (Aghamohammadzadeh et al., 2013).

Figure 3. Role of PVAT in obesity-induced vascular dysfunction. C57BL/6J mice were fed a HFD or normal control diet (NCD) for 20 weeks starting at the age of 8 weeks. The vasodilator response to acetylcholine (A-C) was assessed in noradrenaline-precontracted aorta with or without PVAT, in the absence or presence of the NO synthase inhibitor L-NAME. ***P < 0.001, n = 8. To detect PVAT NO production, NCD and HFD aorta samples were mounted back-to-back on the same slide to guarantee identical staining conditions for the two samples (D). NO production in PVAT-containing aorta was determined by 4,5-diaminofluorescein diacetate (DAF-2 DA) staining. From Xia et al. (2016), with permission of Wolters Kluwer Health, Inc. Copyright © 2016, Wolters Kluwer Health.

Exercise training

In familial hypercholesterolaemic pigs, exercise training significantly reduces the contractile response of the left circumflex coronary artery to ET-1. This effect has been shown to be independent of PVAT (Bunker and Laughlin, 2010). Exercise training of Wistar rats for 8 weeks decreases plasma triglyceride levels with no effect on adipokine profiles. Exercise training up-regulates the expression of eNOS in PVAT but not in the aorta. Exercise reduces the amount of thoracic PVAT without changing the effects of PVAT on relaxation or constriction (Araujo et al., 2015). However, this study was performed in healthy rats on normal chow. Future studies should investigate the effect of exercise training on PVAT function in obesity models.

Improving the function of PVAT eNOS

A major mechanism for PVAT eNOS dysfunction in experimental obesity is eNOS uncoupling due to a deficiency in L-arginine, which is attributable to the up-regulation of arginase found in PVAT of diet-induced obese mice (Xia et al., 2016). Arginases are major L-arginine-consuming enzymes that metabolize L-arginine to urea and L-ornithine.
The up-regulation of arginase limits L-arginine bioavailability for NO production and leads to eNOS uncoupling (Yang and Ming, 2013). An uncoupled eNOS produces superoxide at the expense of NO and thus contributes to oxidative stress in PVAT (Xia et al., 2016). The molecular mechanisms responsible for the increase in arginase and the subsequent L-arginine deficiency observed in diet-induced obesity remain to be determined. We propose that PVAT inflammation plays an important role in this effect. The expression/activity of vascular arginases can be enhanced by a variety of stimuli (Pernow and Jung, 2013), including Ang II (Shatanawi et al., 2011), high glucose (Romero et al., 2008), thrombin (Ming et al., 2004) and oxidized low-density lipoprotein (Ryoo et al., 2006), conditions that inevitably lead to vascular inflammation. To ascertain whether eNOS uncoupling is causally involved in obesity-induced vascular dysfunction, we incubated PVAT-containing aorta with a combination of L-arginine and an arginase inhibitor for 30 min ex vivo in organ chambers. This treatment had no effect on acetylcholine-induced vasodilatation in aorta from control mice, but restored the vasodilator responses of PVAT-containing aortas from HFD-fed mice, indicating that L-arginine deficiency is indeed a reason for PVAT eNOS dysfunction (Xia et al., 2016). Another mechanism for PVAT eNOS dysfunction is the reduced phosphorylation of eNOS at the serine 1177 residue, associated with inhibition of Akt (one of the upstream kinases for serine 1177 phosphorylation) (Xia et al., 2016). In a recent study, we treated diet-induced obese mice with the standardized Crataegus extract WS® 1442, which is known to enhance eNOS phosphorylation at the serine 1177 residue by stimulating Akt activity. The in vivo treatment with WS® 1442 improved the phosphorylation levels of Akt and eNOS, and completely normalized the vasodilator response of PVAT-containing aorta from diet-induced obese mice, confirming the functional importance of the Akt-eNOS axis in obesity-induced PVAT dysfunction (Xia N et al., unpublished data, 2016). In addition, obesity has been shown to reduce eNOS expression in the mesenteric PVAT of diet-induced obese rats (Bussey et al., 2016) and mice (Gil-Ortega et al., 2014). Sustained weight loss in rats restores eNOS expression and improves PVAT NO production (Bussey et al., 2016). In severely obese individuals, bariatric surgery improves NO bioavailability in the PVAT of small subcutaneous arteries (Aghamohammadzadeh et al., 2013).

Enhancing PVAT H₂S production

In a rat model of obesity and metabolic syndrome (cafeteria diet, fed for 3 months), treatment with the PPARγ agonist rosiglitazone increases insulin sensitivity, reduces fasting insulin levels and triglyceride concentration, increases CSE expression and activity as well as PVAT H₂S production, and improves the anticontractile effect of PVAT on aortic rings (Beltowski, 2013). After treatment of rats for 3 weeks with lipophilic atorvastatin (20 mg·kg⁻¹·day⁻¹), but not hydrophilic pravastatin (40 mg·kg⁻¹·day⁻¹), PVAT H₂S levels were increased because its mitochondrial oxidation was inhibited, and the anticontractile effect of PVAT was augmented (Wojcicka et al., 2011). The inhibition of H₂S metabolism results from the atorvastatin-induced decrease in coenzyme Q, which is a cofactor of H₂S oxidation by sulfide:quinone oxidoreductase. In contrast to H₂S, statins do not impair mitochondrial oxidation of organic substrates (Beltowski and Jamroz-Wisniewska, 2012).
The effect of atorvastatin on PVAT H2S levels is independent of the lipid-lowering action of the drug, because both statins at the doses used had comparable effects on the plasma lipid profile. Hydrophilic statins act mainly in the liver, whereas lipophilic statins are equally active in hepatocytes and extrahepatic tissues (Shitara and Sugiyama, 2006) and are supposed to accumulate in triglyceride-rich PVAT (Beltowski, 2013). The effect of statins on PVAT function has not yet been verified in obesity models. Atorvastatin has been shown to improve PVAT function in spontaneously hypertensive rats (Zeng et al., 2009). It is not yet clear whether these effects are mediated by H2S. In addition, cannabinoid CB1 receptor agonists elevate PVAT H2S by inhibiting its mitochondrial oxidation (Beltowski, 2013). The imidazoline I1 receptor agonist moxonidine has been shown to increase myocardial CSE expression and H2S production in streptozotocin-induced diabetic rats (El-Sayed et al., 2016). Sulfhydrylated ACE inhibitors may be of particular interest because of their dual properties: inhibition of ACE and potentiation of the H2S pathway in vivo, as shown for S-zofenopril in spontaneously hypertensive rats (Bucci et al., 2014).

Inhibition of the renin-angiotensin-aldosterone system
In wire myograph experiments with rat mesenteric small arteries, the in vitro creation of a hypoxic environment causes the loss of PVAT's anticontractile function. This hypoxia-induced loss of PVAT's anticontractile effect can be prevented by incubation with the ACE inhibitor captopril or the AT1 receptor antagonist telmisartan (Rosei et al., 2015), or with the aldosterone antagonist eplerenone (Withers et al., 2011), whereas the β-blocker atenolol was without effect (Rosei et al., 2015). Mechanistically, the renin-angiotensin-aldosterone system (RAAS) is thought to play a key role in the exacerbation of the hypoxia-induced inflammation and oxidative stress. Hypoxia stimulates ROS production and may activate the RAAS by up-regulating ACE expression and inhibiting ACE2 activity. Ang II and aldosterone enhance oxidative stress, macrophage infiltration and inflammation (Withers et al., 2011; Rosei et al., 2015). Both Ang II and mitoROS are potent activators of HIF-1α (Patten et al., 2010). In vivo treatment with the AT1 receptor antagonist losartan improves PVAT function in fructose-induced hypertensive rats (Huang et al., 2010). The aldosterone antagonist spironolactone has also been shown to restore PVAT function in a rat model of β-adrenoceptor overstimulation (Victorio et al., 2016a). Nevertheless, the effect of RAAS inhibitors on PVAT function has not been tested in obesity models in vivo so far.

Anti-inflammatory strategies
Chronic administration of melatonin has been shown to reduce body weight, circulating insulin, glucose and triglyceride serum levels in HFD-fed rats (Prunet-Marcassus et al., 2003; Rios-Lugo et al., 2010). In leptin-deficient ob/ob mice, the anticontractile function of mesenteric PVAT is reduced (Agabiti-Rosei et al., 2014). Treatment with melatonin in the drinking water for 8 weeks improves PVAT function. Importantly, melatonin improves vascular function only in the presence of PVAT, indicating the importance of PVAT in the vascular dysfunction observed in ob/ob mice (Agabiti-Rosei et al., 2014). The improvement of PVAT function by melatonin is likely to be attributable to its antioxidant/anti-inflammatory properties.
Melatonin reduces the expression of ET-1, IL-6 and metalloproteases 2 and 9 in the aorta, decreases TNF-α and CD68 levels in visceral fat and increases the expression of adiponectin and adiponectin receptor 1 in PVAT of ob/ob mice (Agabiti-Rosei et al., 2014). Cytokine antagonists also represent attractive anti-inflammatory approaches. Ex vivo incubation with the anti-TNF-α antibody infliximab has been shown to improve PVAT function and PVAT NO production in small arteries isolated from obese patients (Virdis et al., 2015). The effect of in vivo anti-TNF-α therapy on PVAT function has not been reported so far. Fructose feeding in rats leads to a dysregulation of adipokine/cytokine expression in plasma and PVAT, accompanied by vascular dysfunction. Oral administration of resveratrol and metformin to fructose-fed rats improves adipokine/cytokine profiles, enhances AMPK activity and SIRT1 expression in PVAT, and restores PVAT eNOS phosphorylation and acetylcholine-induced vasodilatation (Sun et al., 2014). Importantly, in vitro treatment of aortic rings from normal rats with conditioned media of PVAT from fructose-fed rats reduces acetylcholine-induced vasodilatation. This inhibition of vascular function is reversed by conditioned media of PVAT from resveratrol- or metformin-treated rats (Sun et al., 2014), indicating the key role of PVAT in the improvement of vascular function by the two compounds. In HFD-fed rats, oral administration of diosgenin or resveratrol normalizes PVAT size and restores PVAT expression of TNF-α, IL-6, MCP-1, adiponectin and PPARγ as well as PVAT eNOS phosphorylation. Diosgenin and resveratrol also restore the effect of PVAT on vascular function (Chen et al., 2016). In contrast to these stimulating effects of diosgenin or resveratrol, another natural compound, genistein, showed no beneficial effects on PVAT despite preventing weight gain in obese ob/ob mice (Simperova et al., 2016).

Adiponectin releasers/expression enhancers
A recent meta-analysis demonstrates a significant increase in plasma adiponectin levels following statin therapy (Chrusciel et al., 2016). Adiponectin release from PVAT can be enhanced by stimulating the eNOS-NO-cGMP-PKG pathway (Withers et al., 2014b). Thus, the mechanisms of statin-induced adiponectin secretion may involve vascular NO production (Forstermann and Li, 2011; Li and Forstermann, 2014). Theoretically, PVAT adiponectin release may also be stimulated by activators of eNOS, sGC or PKG. As PPARγ plays a role in governing adiponectin expression (Antonopoulos et al., 2015b), PPARγ agonists (e.g. glitazones) and AMPK activators may enhance the expression of adiponectin in vivo, in addition to their effects on PVAT H2S production (see above). However, these concepts remain speculations at this stage. The effects of such approaches on adiponectin expression/release and PVAT function need to be investigated in future studies.

Conclusion
PVAT plays a crucial role in obesity-induced vascular dysfunction. Hypoxia, inflammation and oxidative stress in PVAT lead to an impairment in the release of vasoactive factors from PVAT, and the normal anticontractile function of PVAT is lost in obesity. PVAT may, therefore, represent a new therapeutic target for vascular complications in obesity. Various approaches have been shown to improve PVAT function; however, their therapeutic potential needs to be further verified in obesity models.
“Competitiveness and complementarity of agricultural products between Thailand and China on a short-term basis” China and Thailand belong to Regional Comprehensive Economic Partnership coun- tries, and agricultural trade is vital to Thailand’s economy. Competition in agricultural trade between countries is fierce. Therefore, it is crucial to understand the advantages and disadvantages of agricultural trade between Thailand and China. Complementarity and competitiveness of international business show the benefits and drawbacks of cross-border exports and the trend of future exports. This study uses quantitative tech-niques to analyze the agricultural trade between Thailand and China. It employed four methods, including the calculations of the Grubel-Lloyd index, revealed comparative advantage index (RCA), trade intensity index (TII), and trade complementarity index (TCI). The result of method 1 indicates that Thailand’s agricultural trade has a more substantial competitive advantage (three years average RCA = 1.69 > 1.25) than China (three years average RCA = 0.37 < 0.8) from 2017 to 2019; they are complementary in specific categories of agricultural products. The result of method 2 indicates that items 03, 07, 13, and 14 of China’s exports and Thailand’s imports have strong complementarity. Items 10, 11, 17, and 19 of Thailand’s exports and China’s imports have strong complementarity. The result of method 3 indicates that the positive factor on bilateral trade flow is significant. The result of method 4 indicates that items 06, 07, 12, 19, 20, and 21 have advantages in intra-industry trade, and items 09, 10, 13, and 18 have advantages in inter-industry trade. The paper has important implications for Thailand’s government to formulate relevant trade policies to enhance its agricultural export competitiveness, which is also conducive to developing bilateral agricultural trade. INTRODUCTION Thailand is a member of WTO, APEC, and ASEAN and has had an increasing economic development from 2010 to 2019. China, the largest GDP entity in Asia and the world's second-largest GDP entity in 2021, is a vital trade partner to Thailand. From 2010-2020, the import contribution rate of total imports from China to the world was 27%, more than the USA and EU, which increased from 12.1% in the period from 2001 to 2009 (Wei, 2021). It has also been the largest trade partner of Thailand for eight consecutive years. In 2021, the total trade between China and Thailand had increased to 131 billion US dollars (General Administration of Customs of PRC, 2022b), which made China the top trade partner of Thailand. Due to the impacts of deep cooperation with ASEAN countries and the "One Belt, One Road" initiative, ASEAN became the leading trade partner to China, whose total trade was 5.67 trillion RMB (equal to 878.2 billion USD) in 2021 (General Administration of Customs of PRC, 2022a). In 2019, China's total exports of agricultural products amounted to 79.1 billion US dollars, imports reached 150.9 billion US dollars, and a trade deficit was 71.8 billion US dollars (its growth rate is 26.5% annually) (Ministry of Agriculture and Rural Affairs of PRC, 2020). In addition, China's "One Belt, One Road" initiative in 2013 also intensified the trade with Thailand and other ASEAN countries; the total trade volume between China and ASEAN stood at 43.9% of the "One Belt, One Road" initiative total trade in 2014 (Zou et al., 2015). 
Furthermore, based on the World Bank Database, Thailand and China had a stable increase in GDP from 2010-2019, and their total exports to each other increased stably (see . RCEP agreement implemented in 2022 also helped Thailand to promote its exports to China. Therefore, one crucial issue is how to help Thailand's government and related third parties find a scientific path to improve agricultural exports to China. Economic integration and global trade could offer better resource use and support domestic producers in their efforts to enter larger markets, paying attention to their countries' comparative advantages (Mayes, 1978). After an accurate analysis of the agricultural trade, Thailand's government could use more targeted strategies to improve agricultural exports. The issue is identifying the advantages and disadvantages of agricultural trade based on a rational methodology between Thailand and China. LITERATURE REVIEW Following the traditional economic theory, competitiveness aligns with the comparative advantage of Ricardo (1821) and the absolute advantage of Smith (1776). Scholars have offered many approaches to analyzing trade effects in one sector under the traditional economic theory. An example of such approaches is the revealed comparative advantage index of Balassa (1965). However, due to the limit of the measurement method, the result could only be effective on specific hypotheses. Strategies such as economic integration could be applied based on the outcome of these methods. Economic integration might effectively promote regional trade and national competitiveness (Petrović et al., 2008). Different countries can benefit from mutual economic integration (Rivera-Batiz & Romer, 1991). Balassa (1965) proposed the empirical method of comparative advantage in a commodity. This method compares one country's export performance with a specific group of countries' export performance in the same industry. Therefore, economic integration could be applied after the analysis result is achieved, which means that the agricultural data analysis is the foundation of governance. In order to analyze the agricultural data based on an efficient method, the study conducted experiments verifying procedures from the literature. Zhou and Zhan (2009) analyzed China-Thailand's agricultural trade based on the trade data from 1996 to 2006. It was found that the vertical structure was the primary trade structure in the trade area. They researched the data using the Grubel-Lloyd index, Bruelhart Index method, and Thom & McDowell Index to analyze the agricultural trade data and discovered the potential relationship between different factors. The study proposed strategies for producing agricultural products based on geographical growing categories to improve China and Thailand's agricultural trade through empirical quantitative analysis. Analysis of the China and ASEAN countries' trade offered this study significant guidance in various directions. First, Xie and Yue (2011) researched the effects of trade facilitation on trade flows between China and ASEAN countries. They analyzed the impact based on data on port efficiency, custom environment, and policy environment. It was found that GDP and Free Trade Zone benefited trade flow, and the trade facilitation promoted trade flow (Xie & Yue, 2011). Second, Pöyhönen (1963) and Tinbergen (1962) researched the determinants of trade flows. Finally, Ekanayake et al. (2010) analyzed Asia's regional trade agreements and their impact on intra-regional trade flows. 
They employed the gravity model with 1980-2009 trade data. It was found that transportation and other distance-related costs are important deter-minants of trade flows (Ekanayake et al., 2010). Therefore, distances could be one crucial factor in analyzing trade data. Due to the nearby geographical advantages, analyzing trade between Thailand and China is a meaningful sub-case. However, the specified agricultural trade case is only one variable case, and this paper will not consider geographical factors. Other research also identified the significance of trade effects between Thailand and China. For example, Nidhiprabha (2019) analyzed the effects of the China-USA trade war on Thailand's exports using the vector autoregressive model. It was indicated that the escalating trade dispute negatively influences Thailand's exports. In addition, the slowdown of the Chinese economy constrains and reduces Thailand's exports (Nidhiprabha, 2019). Thus, Thailand and China have a very close relationship in trade. Zhao and Lin (2008) researched the trade relationship of agricultural products between 10 ASEAN countries and China. They applied the trade gravity model to analyze the 2000-2006 agricultural trade data. The study concluded that the trade flows relied on the economic scale, population, trading policy, and the distances between capital cities (Zhao & Lin, 2008). Therefore, the most crucial factors were the trading policy and economic scale. Therefore, China and ASEAN countries have more potential opportunities in trading agricultural products. Zhang and Yu (2018) researched traffic infrastructure, side reactions, and bilateral trade effects based on the countries covered by the "One Belt, One Road" initiative. The study found that although the improvement of the transportation infrastructure of the importing country is more conducive to the export trade of neighboring countries, it has not affected the increase of China's export share in the importing country. Instead, the transportation infrastructure of the importing country and its neighboring countries promoted China's export trade, and the degree of this impact has increased over time. It was shown that the higher the degree of trade facilitation, the more it could promote an increase in exports (Fang & Zhu, 2013). Yang and Wang (2005) analyzed the complementary of China-Russia cross-board trade. They found that the complementary effect was high, and the potential opportunity was highly based on the model of revealed comparative advantage (Balassa, 1965). They found that China and Russia had a strong relationship based on high TCD (as a trade intensity index) from 1992 to 1993 due to Russian policy effects. From 1994 to 2003, they discovered that the TCD from Russia to China had different trends but still had high TCD compared to other countries. Since Russia and Thailand are geographically nearby countries to China, the model could also be conducted in this study. Hoang (2020) analyzed ASEAN agricultural competitiveness using RCA, RTA, and NRCA as objectives. He discovered that ASEAN countries show significant competitiveness in crustaceans, rice, rubber, spices, vegetable fat and oils, wood, fish, and fuel wood. Thailand, Vietnam, and Indonesia are the most competitive, while Singapore, Brunei, and Cambodia are the weakest countries (Hoang, 2020). Hoang (2020) also discussed the effectiveness of the RCA model and defined weak, medium, and strong competitiveness levels. 
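Several of the studies reviewed above (Tinbergen, 1962; Pöyhönen, 1963; Zhao & Lin, 2008; Ekanayake et al., 2010) rest on the gravity model of trade. As a point of reference only, a minimal sketch of the canonical specification is given below in generic textbook notation; it is not the exact model estimated in those papers, and, as noted, the present study deliberately leaves distance-related factors aside.

```latex
% Canonical gravity specification (textbook form; not the cited studies' exact model)
% T_{ij}: bilateral trade flow; Y_i, Y_j: economic mass (e.g., GDP); D_{ij}: distance;
% G: constant; a, b, c: elasticities; eps_{ij}: error term in the estimated form.
T_{ij} = G \,\frac{Y_i^{a}\, Y_j^{b}}{D_{ij}^{c}}
\qquad\Longrightarrow\qquad
\ln T_{ij} = \ln G + a \ln Y_i + b \ln Y_j - c \ln D_{ij} + \varepsilon_{ij}
```

In empirical applications the log-linear form is estimated by regression, often with additional dummies for common borders, free trade agreements or other "special country bias" factors.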
Kuang and Tang (2012) analyzed the complementarity and competitiveness of China's and Thailand's agricultural products based on data from 2000 to 2010. They aimed to assess the trade structure and find the relationships. The study analyzed both the overall structure and the separate product structures and found that China had low competitiveness overall but still showed competitiveness in exporting some agricultural products. The study also examined whether there was a significant change in the trade factors between the two countries over time (Kuang & Tang, 2012). Based on these studies of agricultural trade, the research models proved to be effective in analyzing related agricultural trade factors. The present paper uses these models to analyze the advantages and disadvantages of agricultural trade between Thailand and China on a competitive and complementary scale.

METHODOLOGY
The study analyzes agricultural export and import trade data between Thailand and China. Moreover, it determines the advantages and disadvantages of agricultural trade on a competitive and complementary scale. The theoretical assumption is that the short-term data (2017-2019) could reflect the latest trade trend in the short run, whereas long-term data might bring more uncertainty. The analysis was limited to three years because Thailand suffered a severe drought in 2016, and after 2019, the Covid-19 pandemic adversely affected world trade. Long-term data might bring more errors and deviations. Therefore, the data before 2017 and after 2019 were excluded from the analysis. The paper analyzed the data using comparative advantage theory, complementary trade theory, the trade intensity approach, and the Grubel-Lloyd index method. The study used 2017, 2018, and 2019 agricultural product import and export trade data coded by the HS12 classification.

Method 1
Due to the differences in agricultural technology and industry structure between China and Thailand, they have comparative advantages in agricultural exports. In order to analyze the comparative advantages of their agricultural exports, this paper employs the revealed comparative advantage (RCA) index (Balassa, 1965):

$RCA_i^k = \frac{X_i^k / X_i^t}{X_w^k / X_w^t}$,

where RCA_i^k is the revealed comparative advantage index. RCA_i^k ≥ 2.5 means that the export of commodity k in country i has an extreme competitive advantage; 1.25 ≤ RCA_i^k < 2.5 means that the export of commodity k in country i has a strong competitive advantage; 0.8 ≤ RCA_i^k < 1.25 means that the export of commodity k in country i has a medium competitive advantage; and RCA_i^k < 0.8 means that commodity k in country i has a weak competitive advantage. X_i^k and X_w^k are the export value of commodity k from country i to the world and the total export value of commodity k on the world market. X_i^t and X_w^t are the value of total exports from country i to the world and the total export value of all commodities on the world market. Zhang (2021) implemented this method in a comparable analysis of agricultural product trade relationships.

Method 2
Method 2 measures China and Thailand's agricultural trade complementarity based on the TCI index. The trade complementarity index (TCI) is employed to determine the correlation between the export of a particular product in one country and the import of that product in another country, which can reflect the corresponding complementary relationship between one country's exports and another country's imports.
It is calculated by:

$TCI_{ij}^k = RCA_i^k \times rca_j^k$, with $rca_j^k = \frac{y_j^k / y_j^t}{y_w^k / y_w^t}$,

where RCA_i^k is the comparative advantage index of country i's exports of commodity k; X_i^k and X_w^k are the export value of commodity k from country i to the world and the total export value of commodity k to the world; and X_i^t and X_w^t are the value of total exports from country i to the world and the total export value of all commodities on the world market. rca_j^k is the competitive disadvantage index of country j's imports of commodity k; y_j^k and y_w^k are the trade value of commodity k imported into country j and the trade value of commodity k imported worldwide; and y_j^t and y_w^t are the total trade value of all imports into country j and the total value of all world imports. In this method, the total value of world imports of a product equals the total value of world exports of that product. A larger TCI index means that the two countries have stronger complementarity: if the TCI index is greater than 1, the two countries have strong complementarity; if it is lower than 1, the two countries have weak complementarity.

Method 3
The next step is to use the Trade Intensity Index (TII) approach. The TII analyzes the trade flow and evaluates the relationship between the countries. Kojima (1962) improved the trade intensity index method. After that, Drysdale (1967) refined the procedure further and distinguished two determinants: one is commodity bias, and the other is special country bias. The special country bias includes the effects on international commerce of politics, geography, institutions, and history. Finally, Yang and Wang (2005) applied the TII to analyze the trade dependence and relationship between China and Russia. According to Brown (1947) and Kojima (1962), the higher the TII (>1), the more positive factors there are in the bilateral trade between the countries (e.g., better free trade agreements, better geographical factors, and better comparative advantages). The formula is given by:

$T_{ij} = \frac{x_{ij} / X_{it}}{x_{wt} / X_{wt}}$,

where x_ij is the value of country i's exports to country j, X_it is country i's total exports, x_wt is the value of country j's total imports, and X_wt is the total of world imports. An index of T_ij > 1 indicates that the positive factor on bilateral trade flow is significant; the bigger the number, the better the positive effect. T_ij < 1 indicates that the positive factor on bilateral trade flow is insignificant; the lower the number, the worse the trade relationship effect. In this method, the total value of world imports of a product equals the total value of world exports of that product.

Method 4
Grubel and Lloyd (1971) introduced the Grubel-Lloyd index to analyze intra-industry trade. Egger et al. (2007) studied multinational firms with the Grubel-Lloyd measure of intra-industry trade (IIT) and developed a new model for specific focus areas, and this measurement has been widely used. The present study, however, uses the traditional method. The Grubel-Lloyd index was implemented to analyze intra-industry trade, and the formula is:

$GL_j = 1 - \frac{|X_j - M_j|}{X_j + M_j}$,

where X_j is country A's exports to country B in product category j, M_j is country A's imports from country B in product category j, and j is the targeted category of product industry. If GL_j is near 1, the two countries are developed in intra-industry trade for that category; if GL_j is near 0, they are developed in inter-industry trade.
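To make the four measures concrete, the sketch below implements them directly from the definitions above. It is a minimal illustration with made-up numbers, not the authors' code; the function and variable names and the toy values are assumptions, and in practice the inputs would come from the HS12-coded trade statistics for 2017-2019.

```python
# Minimal sketch of the four indices defined above (illustrative only; toy numbers, not the study's data).

def rca(x_ik, x_it, x_wk, x_wt):
    """Revealed comparative advantage (Balassa): (X_i^k / X_i^t) / (X_w^k / X_w^t)."""
    return (x_ik / x_it) / (x_wk / x_wt)

def import_intensity(y_jk, y_jt, y_wk, y_wt):
    """rca_j^k: country j's import intensity in commodity k, defined analogously to RCA."""
    return (y_jk / y_jt) / (y_wk / y_wt)

def tci(rca_exporter_k, rca_importer_k):
    """Trade complementarity index for commodity k: product of the exporter's RCA and the
    importer's import-intensity index; a value above 1 suggests strong complementarity."""
    return rca_exporter_k * rca_importer_k

def tii(x_ij, x_it, m_jt, m_wt):
    """Trade intensity index: (x_ij / X_it) / (M_j / M_w); a value above 1 indicates that
    bilateral trade is larger than expected from the partner's share of world imports."""
    return (x_ij / x_it) / (m_jt / m_wt)

def grubel_lloyd(exports_j, imports_j):
    """Grubel-Lloyd index for category j: 1 - |X_j - M_j| / (X_j + M_j).
    Values near 1 indicate intra-industry trade; values near 0 indicate inter-industry trade."""
    return 1 - abs(exports_j - imports_j) / (exports_j + imports_j)

if __name__ == "__main__":
    # Hypothetical values in billions of USD, chosen only to illustrate the interpretation bands.
    rca_th = rca(x_ik=5.0, x_it=245.0, x_wk=180.0, x_wt=18_000.0)            # ~2.04 -> "strong" band
    rca_cn_imports = import_intensity(y_jk=30.0, y_jt=2_000.0, y_wk=180.0, y_wt=18_000.0)  # ~1.5
    print(f"RCA = {rca_th:.2f}")
    print(f"TCI = {tci(rca_th, rca_cn_imports):.2f}")                        # > 1 -> complementary
    print(f"TII = {tii(x_ij=30.0, x_it=245.0, m_jt=2_000.0, m_wt=18_000.0):.2f}")  # > 1 -> significant
    print(f"GL  = {grubel_lloyd(exports_j=2.0, imports_j=1.5):.2f}")         # near 1 -> intra-industry
```

Looping these functions over the 24 HS12 chapters and the three study years would reproduce the kind of category-level figures reported in the results below.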
Selected HS12 category descriptions from the classification table include: 14 Plaiting materials; other plant products. 15 Animal or vegetable fats and oils; refined edible oils and fats. 16 Meat, fish and other aquatic invertebrate products. 17 Sugar and confectionery. 18 Cocoa and cocoa products. 20 Products of vegetables, fruits, or other parts of plants. 23 Food industry residues and waste; formulated feed. 24 Tobacco and tobacco substitute products.

RESULTS
Due to the different factors between Thailand and China in agricultural trade, the experimental results effectively demonstrated their different trade characteristics. The results also revealed the advantages and disadvantages of agricultural trade on different scales in their specific situations. Table 2 shows that China and Thailand have comparative advantages in exporting agricultural products. China has a total RCA index lower than 0.8 in all three years, which means that it has a weak competitive advantage in exporting agricultural products. On the other hand, Thailand has a very high total RCA index, higher than 1.25, meaning a strong comparative advantage in exporting agricultural products overall. Therefore, Thailand has a much higher RCA index than China, which means that Thailand has a more substantial advantage in agricultural exports than China. However, there are differences in the categories of items with competitive advantages between the two countries; China has weak comparative advantages in exporting agricultural products overall, but it has some categories with comparative advantages that Thailand does not have. The results also show that they have a complementary advantage in exporting specific agricultural products. Table 3 shows that, considering specific types of agricultural products, China's exports of agricultural products and Thailand's imports of agricultural products are strongly complementary, mainly in items 03 (fish and other aquatic invertebrates), 07, 13, and 14. The results show that Thailand and China have specific complementary categories of agricultural products in one country's exports and the other's imports. Therefore, they could cooperate further to reinforce their advantages on both sides. Table 4 shows that from 2017 to 2019, the intensity index in both export directions between China and Thailand is above 1, which means that the positive factor on bilateral trade flow is significant. In this case, a higher intensity index means that the two countries have more significant advantages in trading from better factors such as comparative advantage structures, better free trade agreements, and geographically nearby locations (Zhang & Tang, 2017). Therefore, the two countries could cooperate in agricultural trade on a vast political and economic scale. The study examined 24 different HS12 categories of agricultural products traded between Thailand and China from 2017 to 2019 and obtained the corresponding GL index results. Note: data for Thailand's exports to China of item 24 in 2018 and for China's exports to Thailand of item 02 in 2017 and 2018 could not be obtained; "-" means that the value could not be calculated, and 0.00 means that the number is minimal but still positive.

DISCUSSION
The findings indicated that Thailand and China have specific comparative advantages in agricultural exports in different agricultural product categories. Thailand (RCA = 1.69) has a more substantial comparative advantage than China (RCA = 0.37) in exporting agricultural products overall, and the difference in these figures shows that Thailand has a more substantial agricultural export advantage. The TII (> 1) shows significant positive factors in the trade between the two countries. It also shows that the value of bilateral trade is higher than would be expected given the two countries' shares of world trade.
Therefore, it is vital to strengthen the cooperation in bilateral trade of agricultural products, as indicated by the TII figures. An alternative formulation of the complementarity index uses one country's imports from the partner country instead of its imports from the world. The study does not adopt that variant here, since the analysis relies on the assumption that RCA_i^k and rca_j^k should be measured at comparable levels on a world basis. The intra-industry and inter-industry effects of the various categories of agricultural items in the study would be critical elements in assessing the scale of agricultural product industrialization. Items 06, 07, 12, 19, 20, and 21 have advantages in intra-industry trade, and items 09, 10, 13, and 18 have advantages in inter-industry trade. These different types of agricultural exports reflect the characteristics of agricultural industrialization and each country's natural agricultural advantages. The other issue is how to set up policies according to the results of the analysis. Negotiating and designing new free trade agreement policies could be one way to build a more open trade environment, and it might speed up intra-industry trade flows and enhance trade diversity. Adjusting foreign exchange policy could help agricultural exports but could lead to further trade conflicts. Franke (1991) concluded that increased exchange rate volatility reduces international trade volumes because firms become careful and cautious. There is also evidence that some of the consequences of an overvalued currency tend to be compensated by trade policy, especially antidumping interventions (Nicita, 2013). Lamb (2000) found that the exchange rate plays a significant role in explaining export crop production, food production, and aggregate agricultural supply, and that it might act as a proxy for excluded macroeconomic variables. Therefore, using more open trade policies to promote agricultural exports is much better than using foreign exchange policies alone. Thailand's government could use financial tools to control exchange rate fluctuations, since price changes might indirectly affect local farmers' revenue. A targeted subsidy policy could also support local producers in specific agricultural categories and help them find a more efficient way to export. Based on the outcome of the analysis, Thailand could design specific export strategies to balance its future agricultural development by segmenting the advantages and disadvantages of its agricultural trade.

CONCLUSION
The paper assessed the complementarity and competitiveness of agricultural exports between Thailand and China. It found that both countries have advantages in exporting different agricultural products. Thailand has better competitive advantages than China overall, and the two countries are complementary in particular categories of agricultural products. The two countries have potential advantages in cooperating in agricultural trade. The higher trade intensity index indicated that the two countries have more significant advantages in trading from better factors such as comparative advantage structures, better free trade agreements, and better geographical locations. The study also found differences in the advantages of inter- and intra-industry trade for different agricultural products. These advantages could be enhanced further by implementing trade policies and adjusting domestic industrial policies. Inter- and intra-industry trade can also be converted into one another, regardless of the natural factors.
However, this study has its limitations in sampling; three years' data analysis could only be used on a short-run base, but the long-period data might bring unpredictable errors in this case.
A new osteichthyan from the late Silurian of Yunnan, China Our understanding of early gnathostome evolution has been hampered by a generally scant fossil record beyond the Devonian. Recent discoveries from the late Silurian Xiaoxiang Fauna of Yunnan, China, have yielded significant new information, including the earliest articulated osteichthyan fossils from the Ludlow-aged Kuanti Formation. Here we describe the partial postcranium of a new primitive bony fish from the Kuanti Formation that represents the second known taxon of pre-Devonian osteichthyan revealing articulated remains. The new form, Sparalepis tingi gen. et sp. nov., displays similarities with Guiyu and Psarolepis, including a spine-bearing pectoral girdle and a placoderm-like dermal pelvic girdle, a structure only recently identified in early osteichthyans. The squamation with particularly thick rhombic scales shares an overall morphological similarity to that of Psarolepis. However, the anterior flank scales of Sparalepis possess an unusual interlocking system of ventral bulges embraced by dorsal concavities on the outer surfaces. A phylogenetic analysis resolves Sparalepis within a previously recovered cluster of stem-sarcopterygians including Guiyu, Psarolepis and Achoania. The high diversity of osteichthyans from the Ludlow of Yunnan strongly contrasts with other Silurian vertebrate assemblages, suggesting that the South China block may have been an early center of diversification for early gnathostomes, well before the advent of the Devonian “Age of Fishes”. Introduction Osteichthyans, comprising the Actinopterygii (ray-finned fishes) and Sarcopterygii (lobefinned fishes and tetrapods), display a remarkable diversity with over 60,000 extant species. However, their origins and early evolution are obscured by the rarity and typically fragmentary nature of pre-Devonian fossil material [1,2,3]. The anatomy of these newly discovered Silurian fishes has narrowed the morphological gap between basal sarcopterygians and actinopterygians [8], as well as between early osteichthyans and placoderms [8,12,13]. The unambiguous presence of dermal pelvic girdles in Guiyu facilitated the identification of isolated plate-like bones from the Early Devonian Xitun Formation, Yunnan, as the pelvic girdles of the enigmatic osteichthyan Psarolepis romeri [15]. Prior to these discoveries, pelvic girdles with extensive dermal components were thought to be restricted to extinct gnathostome groups, specifically the pelvic spines of acanthodians and the fully developed girdles of placoderms [15,16]. The phylogenetic positions of Guiyu, Psarolepis and other Silurian-Early Devonian osteichthyans are currently under scrutiny. Guiyu and Psarolepis are usually resolved as stem sarcopterygians [8,15,17,18,19,20,21]. Recent analyses incorporating these taxa plus Achoania, another Xitun form [20], resolve a cluster of these three genera as the sister group to the remaining sarcopterygians [8,12]. However these fish manifest combinations of features found in sarcopterygians, actinopterygians (cheek and opercular-gular bone configuration, scale articulation) and non-osteichthyans (median dorsal plates, morphology of appendicular girdles, absence of tooth enamel). Such incongruent character distribution has led to suggestions of an alternative phylogenetic position for these forms as stem-group osteichthyans [15,20,21,22]. 
A well preserved fossil osteichthyan, consisting of a partial postcranium (Fig 2), was collected from the Kuanti Formation on the outskirts of Qujing in 2009. It is distinct from other early osteichthyans due to its unusual scale morphology and dermal ornamentation, although it shares several features with Guiyu and Psarolepis, including spine-bearing dermal pelvic and pectoral girdles, characters also present in placoderms [15]. This new form represents the second known Silurian bony fish with an unambiguously associated dermal pelvic girdle, and its new character combination adds significantly to our understanding of early osteichthyan anatomy and diversity.

Systematic paleontology

Holotype and only specimen. V17915, a partial postcranium (Fig 2), with associated cleithrum, interclavicle and pelvic girdle. Institute of Vertebrate Paleontology and Paleoanthropology (IVPP), Beijing.

Diagnosis. Bony fish with spine-bearing dermal pectoral and pelvic girdles. Large median dorsal plates, with those immediately anterior to each of the two dorsal fins bearing a large spine. Dermal surfaces of the scales and bony plates composed of glossy enamel ornamented with coarse sub-parallel ridges. Large surface pore openings within inter-ridge furrows on the appendicular girdles, gulars and median dorsal plates, but absent on the scales. Thick rhombic scales with a distinct neck separating the crown and the base. Anterior flank scales with a dermal interlocking mechanism of ventral bulges embraced by dorsal concavities on the ventrally adjacent scales. About 30 scale columns in front of the first dorsal fin base.

Etymology. Generic name from the Persian spara (shield) and the ancient Greek lepis (scale), in reference to the resemblance of the scales of the fish to depictions of rectangular wicker shields carried by the Achaemenid Sparabara infantry. Specific name after V. K. Ting (1887-1936) for his pioneering work on the geology of Yunnan [4].

Notes: The presence of spine-bearing dermal pectoral and pelvic girdles separates Sparalepis from all other known osteichthyans except for Guiyu and Psarolepis [15]. The combination of prominent linear ridges and pore openings on the dermal surfaces of all the larger bones and ridge scutes distinguishes Sparalepis from Guiyu, which possesses ridges only. The scale ornament, consisting of linear ridges and devoid of pores, is similar to that of Guiyu [8] and many early actinopterygians [23,24,25,26], but distinct from the porous cosmine-like surface on the scales of Psarolepis [27]. The scales of Sparalepis are smaller than those of Guiyu, with about 30 scale rows anterior to the first dorsal fin against 15 in Guiyu. As with Psarolepis, the flank scales lack extensive depressed fields and possess necks which separate the crowns from the bases [27]. The ventral bulges and dorsal concavities on the outer surfaces of the anterior flank scales of Sparalepis form a unique interlocking system among early osteichthyans.

Description
The holotype comprises an articulated partial postcranium in flattened dorsal view, from near the base of the skull to about 2 cm beyond the base of the 2nd dorsal fin spine, an interclavicle with an associated ventral scute, a left cleithrum and a right dermal pelvic girdle (Figs 2 and 3). Assuming limited displacement of the posterior section of scales, which is folded perpendicular to the rest of the specimen, the preserved section accounts for a body length of roughly 11 cm, suggesting a complete fish of a little over 20 cm.
Paired-fin girdles. A left cleithrum (cle) and right dermal pelvic girdle (dpg), both displaced from their original positions, lie adjacent to one another. No endoskeletal structures are visible. The cleithrum is incompletely exposed, with the dorsal apex of the postbranchial lamina concealed by matrix and the overlying pelvic girdle (Fig 3C) while the anterior margin of the ventro-lateral lamina has been worn away. The visible area presents a broad, flat ventro-lateral section and a taller lateral section that tapers dorsally. The two sections meet at an angle of about 60 degrees at the level of a prominent pectoral spine (sp). The total length of the flattened ventrolateral margin (including the spine) is 25.5 mm. Ornament consists primarily of linear ridges with those on the spine being particularly elongate and thick. Sparse pores are present in the furrows between the ridges, particularly on the ventral surface. Ornamentation on the postbranchial lamina on the anterior half of the lateral section consists of closely-set denticulate tubercles, a feature also present in Psarolepis [27,28] and placoderms [29]. The right dermal pelvic girdle (Figs 3B and 4A), as preserved in ventral view, is a sub-rectangular plate (12.9 mm along the lateral margin, including the spine) with ornament ranging from long linear ridges near the posterolateral margin to short ridges with interspersed pores over the remainder portion. The exposed lamina presents a gently curved anterior margin, a straight median margin and a slanted posterior margin that tapers posterolaterally towards a short cylindrical pelvic spine (pf.sp) as in Psarolepis, but differs from Guiyu which has a larger, more elongate spine [15]. A thickened lateral rim curves dorsally to meet a semi-exposed lateral lamina, and posteriorly forms a cylindrical strut resembling a slender version of the proximal portion of the pectoral fin spine. Two elongate dermal elements lying adjacent to the displaced girdles are tentatively identified as an interclavicle (icl) and the left element of a pair of ventral scutes (Figs 1 and 3B). Alternatively, they may represent a median gular and left lateral gular, but this is unlikely owing to their proximity to the pectoral girdle and the absence of any other opercular-gular bones (submandibular or branchiostegal elements) in the preserved portion of the holotype. The interclavicle is a kite-shaped bone with a rostrocaudal length of 25.2 mm. The shorter anterolateral margins meet at a sharp anterior apex while the longer posterolateral edges, with slightly concave contour matching the adjoining ventral scute, taper gently towards a narrow, squared off posterior end. Median dorsal and ventral plates. As in Guiyu [8,15], Sparalepis bears a single median row of large dorsal plates or ridge scutes, with large spines present on the plates immediately anterior to each of the two dorsal fins (Figs 1 and 3A). The dermal surface is reticulated with wavy ridges interspersed with numerous pores, contrasting sharply with the adjacent pore-less scales. A series of three plates (md1-md3) commences immediately behind the presumed location of the base of the skull. The anterior two are uniformly square shaped at about 8 mm in length, while the third is elongated (17.5 mm long at the base) and bears a large (30 mm long) dorsal spine (dfs1). 
A similar configuration exists in Guiyu, although the second plate of Guiyu has a concave posterior margin that embraces the spine-bearing third plate [15] whereas the margin in Sparalepis appears straight. As noted in Ref. 15, the original description of Guiyu erroneously combines the first and second plates due to uncertainty over natural margins versus artifacts of preservation [8,15]. The presence of a first dorsal fin is inferred due to a posterior groove in the spine on the third plate followed by 17 mm gap in the median dorsal series, likely demarcating the basal extent of the fin web. A first dorsal fin was absent in the initial reconstruction of Guiyu [8], but is now believed to have been present due to similar reasoning [15]. Following the location of the presumed 1 st dorsal fin are a 4.5 mm long median dorsal scute (mds), only marginally larger than the adjacent scales, followed much larger 18 mm long plate (md4). Disruption of the median dorsal surface makes it unclear if additional plates were present leading up to the 2 nd dorsal fin spine (dfs2). The spine is broad-based and broken distally. Sparalepis appears to have possessed paired ventral ridge scutes (vrs), as does Guiyu [15]. A lanceolate bone, 21.3 mm in length and posterolaterally adjacent to the interclavicle, is considered to represent the left example of the anterior-most pair of ventral scutes, with the corresponding margins of the interclavicle indicating the presence of a missing right scute. The convex medial margin of the scute bears a smooth overlap area for the median gular. Ornamentation comprises a reticulated arrangement of wavy ridges with numerous pores within the inter-ridge furrows. A displaced rhombic plate lies adjacent to the second dorsal fin spine. It features pore-less linear ornament running perpendicular to the long axis. It is considerably larger (2.5mm length x 5.5mm height) than the flank scales in the immediate vicinity, and is similar in form and ornament to the paired ventral ridge scutes of Guiyu preserved immediately posterior of the pelvic girdle [15]. Scales. Recognizing and documenting the considerable variation in scale morphology on a single fish is essential for future identification of isolated scales from conspecific or related forms [30,31,32,33]. In describing the squamation of palaeoniscoid-grade actinopterygians, Esin [30] devised a scheme where the postcranial surface was divided into nine zones of distinct scale morphology. This method has been successfully employed by others in the description of Devonian actinopterygians [23,24,31] and other early osteichthyans with a rhombic scale morphology [27,33]. Although the flank scales do not display as prominent a reduction in dorso-ventral height along the antero-posterior flanks (Areas A to C) as in early actinopterygians [23,24,25,31], Sparalepis displays a broadly similar pattern of scale-variability with distinct morphology present within the areas designated by Esin [30]. As a result, Esin's scheme will be used in this description (Fig 5A). Scales from the caudal fin and peduncle (Esin's Area D) are not preserved. About 30 diagonal scale columns lie in front of the first dorsal fin base, as distinct from 15 comparable scale columns in Guiyu (condition in Psarolepis is unknown). 
All scales exposed in visceral view (completely visible in the mid-posterior flank and partially visible on the anterior flank) display a thickened base, separated from the crown via a distinct neck (Fig 6) as in Psarolepis [27], Cheirolepis [34] and some acanthodians [27], but not in Guiyu. Dermal ornamentation on the crown consists of parallel rostro-caudally directed enamel ridges, usually separate but with variable degrees of anastomosing on larger scales (Fig 5). There are no pores, unlike the ornament on the appendicular girdles and dorsal ridge scutes. As in Psarolepis [27], the scales lack a bony depressed field with the crown almost fully covering the base in crown view (Fig 5D). Area A: Scales from the anterior flank (Figs 5B, 5C and 6C) are rectangular with a dorsoventral depth of up to 5 times the antero-posterior length (up to 1.5 mm long x 7.5 mm tall). The anterior margins are straight whereas the posterior margins of the crown are serrated by the terminal portions of the ridges. Where sections of the squamation are missing, negative impressions of the scale base reveal a well-developed keel (k) flanked by prominent ledges (l.a, l.p). An anterodorsal bulge (p, p.ad), likely homologous with the combined peg and anterodorsal process of the "subtype 1" scales of Psarolepis romeri [27], slots into a shallow ventral socket (s) of the dorsally adjacent scale. Each scale crown displays a prominent ventral bulge (vb) that slots into a pronounced concavity on the dorsal margin of the scale beneath it, effectively forming an interlocking dermal series of ventral 'pegs' and dorsal 'sockets'. Scales partly out of articulation reveal a smooth surface (oa.vb) to accommodate the ventral bulge of the scale above it (Fig 5C) and reveal considerable overlap of the scale crowns. The anterior scales of Sparalepis thus possess separate dorso-ventral interlocking surfaces on the bases and the crowns (Fig 6C). The dermal articulations terminate at the 12 th scale row which is considered to be the transition zone with area B. Area B: The scales on the mid-flank (Fig 5D) are of similar proportions to those of area A (up to 1.5mm long x 7mm tall) with the dorsoventral dimensions becoming gradually shorter down the length of the body. The crowns possess straight dorsal and ventral margins without the pronounced dorso-ventral interlocking surfaces of their anterior counterparts. Area C: Scales from the posterior flank (aft of the approximate level of the 2 nd dorsal fin spine) are only preserved in basal view (Fig 6A and 6B). They are rectangular with a height of about 4 times the length (up to 1.2 mm long x 4.5mm tall). The base is thickened with a prominent keel wedged between the anterior and posterior ledges. The dorsal pegs and the articulating ventral sockets are broad, low and gently rounded. The anterodorsal process is large and club-shaped, underlying (in visceral view) the dorsal-most surface of the posterior ledge of the scale in front (Fig 6A and 6B). Area E: Area E (Fig 5E) comprises the scales on the dorsal surface anterior of the 2 nd dorsal spine. At the transition zone with the flanks, the scales are rectangular with a height-length ratio of 3 to 3.5. There is a marked reduction in scale height with each ascending horizontal scale row. Scales adjacent to the median ridge scutes are rhombic with a roughly equal height to length ratio. 
Area F: Scales on the lateral margins of the belly area are rectangular and drastically shorter (dorsoventrally) than the adjacent Area A-B scales with a dorso-ventral depth of twice the antero-posterior length. They lack the pronounced dorsal concavities of the adjacent Area A scales. Ventral to this, the scales are smaller and rhombic in shape, with a depth of about half the length. Area G: Closely packed minute diamond-shaped scales, 1 mm in length and ornamented with one or two short ridges, are preserved immediately adjacent to the trailing edge of the 2 nd dorsal fin spine (Fig 6A). Additional scales in basal view are preserved in association with the Area C scales. The keel on these scales is considerably broader and more prominent than on the adjacent flank squamation while the peg and anterodorsal process are greatly reduced. Area H: Scales from the posterior dorsal surface (Fig 6A) are preserved in basal view. They are similar to the Area G scales in being diamond shaped, lacking prominent peg or anterodorsal process and in having a greatly enlarged keel. The presence of a distinctly palaeoniscoid-like arrangement of scale variability in non-actinopterygian osteichthyans like Sparalepis and Andreolepis [33] suggests that this type of squamation is plesiomorphic within crown-group Osteichthyes. Guiyu [8] displays a broadly similar configuration although the squamation of this taxon has yet to be described in detail. Phylogenetic results The discovery of Sparalepis provides a second taxon of Silurian osteichthyan, along with Guiyu, known from a substantial portion of the skeleton (Fig 7). To explore the phylogenetic position of Sparalepis, we conducted phylogenetic analyses using the dataset presented by Qiao et al. [35] with the addition of character 336: "Relationship of crown and base of isolated trunk scale: crown fully covering the base (0); crown sitting on the bony base, with an exposed depressed field overlapped by adjacent scale in articulation (1)". This dataset (S1 Text and S1 Dataset) was modified from that of Long et al. [36], which was in turn expanded and modified from previous analyses [8,37,38,39], and further expanded from Giles et al. [40], Brazeau and de Winter [41], and Lu et al. [42]. The character data entry and formatting were performed in Mesquite (version 2.5) [43]. All characters were treated as unordered and weighted equally, as in the earlier versions of this dataset. Two agnathan taxa (Galeaspida and Osteostraci) were set as the outgroup. The dataset was subjected to the parsimony analysis in TNT software package [44]. The analyses were run using a traditional search strategy, with default settings apart from the following: 10,000 maximum trees in memory and 1,000 replications. Bremer support and bootstrap values were calculated using TNT [44], with heuristic searches. Our analysis generated 2496 trees of 957 steps (CI = 0.3772; HI = 0.6228; RI = 0.8070; RCI = 0.3044), a strict consensus of which (Fig 8A) broadly agrees with the favoured hypothesis of Zhu et al. [12] with placoderms recovered as a paraphyletic array of stem gnathostomes, acanthodians as a paraphyletic array of stem chondrichthyans, and Entelognathus as the immediate sister group of crown gnathostomes. One most parsimonious tree (S1 Fig), which agrees well with the 50% majority-rule consensus (Fig 8B), is selected for illustrating inferred character transformations at various nodes. 
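As a side note on the fit statistics quoted above, the ensemble consistency and retention indices are simple functions of the observed tree length and the per-character minimum and maximum possible step counts. The short sketch below illustrates only that arithmetic; the step counts used are invented placeholders, not values from the actual 336-character matrix.

```python
# Illustrative computation of ensemble CI, HI, RI and RCI for a parsimony tree.
# Step counts below are placeholders, NOT the real values from the 336-character dataset.

def tree_fit_statistics(observed_steps, min_steps_per_char, max_steps_per_char):
    """observed_steps: total length of the tree (e.g., 957 in the analysis above).
    min_steps_per_char: minimum conceivable steps for each character on any tree.
    max_steps_per_char: maximum conceivable steps for each character (fully unresolved tree)."""
    m = sum(min_steps_per_char)   # M
    g = sum(max_steps_per_char)   # G
    s = observed_steps            # S
    ci = m / s                    # ensemble consistency index
    hi = 1.0 - ci                 # homoplasy index
    ri = (g - s) / (g - m)        # ensemble retention index
    rci = ci * ri                 # rescaled consistency index
    return ci, hi, ri, rci

if __name__ == "__main__":
    # Placeholder example with 10 characters; a real analysis sums over all characters.
    ci, hi, ri, rci = tree_fit_statistics(
        observed_steps=28,
        min_steps_per_char=[1] * 10,
        max_steps_per_char=[5] * 10,
    )
    print(f"CI={ci:.4f}  HI={hi:.4f}  RI={ri:.4f}  RCI={rci:.4f}")
```

The reported values behave accordingly: HI = 1 - CI (0.6228 = 1 - 0.3772) and RCI = CI x RI (0.3772 x 0.8070 ≈ 0.3044).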
Sparalepis is consistently clustered with Guiyu, Achoania and Psarolepis; together they are positioned at the base of the Sarcopterygii, collectively forming the sister group of crown-group sarcopterygians. Previously described as a "Psarolepis-Guiyu cluster" [15], this putative clade is here informally referred to as the "psarolepids" for the sake of simplicity. Key synapomorphies uniting these taxa in this analysis (S1 Fig) include spines associated with the dorsal (character 122, state 1), pectoral (character 124, state 1), and pelvic fins (character 240, state 1); median dorsal plates (character 104, state 1); a dermal pelvic girdle (character 239, state 0) and an adsymphysial tooth whorl (character 41, state 1; their presence in Guiyu and Psarolepis is inferred on the basis of facets on the anterior dentaries of these taxa). These purported synapomorphies are problematic, as all but the tooth whorls are present among the gnathostome stem-group (= placoderms). Additionally, adsymphysial tooth whorls are widely distributed among disparate branches of the gnathostome crown group (onychodonts, porolepiforms, the actinopterygian Howqualepis and the acanthodian Poracanthodes). In our analysis, large spines anterior to the appendicular and dorsal fins are lost in the stem-group Osteichthyes prior to the node containing Dialipina and crown-Osteichthyes. Median dorsal plates and plate-like dermal pelvic girdles are lost in the gnathostome stem prior to the node containing the osteichthyan and chondrichthyan total groups. Thus, with the available data, the broad suite of placoderm-like characters present in the psarolepids is resolved as homoplastic. At present, the psarolepid cluster contains the totality of Silurian crown-gnathostomes known from reasonably complete remains. Both Guiyu and Sparalepis are known from articulated specimens, whereas the disarticulated remains of Psarolepis are sufficiently comparable to these forms to reconstruct a substantial proportion of its anatomy [15,27,28]. The other two Silurian genera, Lophosteus and Andreolepis, are highly fragmentary, as is Meemannia from the Lochkovian [45], recently posited as a stem-actinopterygian [46]. The Early Devonian Dialipina is known from complete specimens [47] that display an unusual (for an osteichthyan) dermal configuration of numerous small cranial and gnathal plates, presenting difficulties in determining homologies with other bony fishes [25] and possibly suggesting some degree of anatomical specialization. This taxon remains enigmatic and demands further scrutiny [12]. The placoderm-like characters highlighted in the psarolepids are also absent among the basal actinopterygians based on the available fossil record, although the clade is not represented by reasonably complete material from before the Middle Devonian. While the presence of Silurian sarcopterygians necessitates the existence of Silurian actinopterygians, indisputable pre-Devonian representatives are currently unknown [48]. Given the striking anatomical disparity of forms like Guiyu and Sparalepis when compared to Middle-Late Devonian sarcopterygians, their discovery and description have profoundly enhanced our understanding of early sarcopterygian anatomy [8,15]. This hints at the extent of our limitations regarding early actinopterygian evolution in the absence of similarly complete Silurian fossil representatives.
An increasing body of evidence, including the strong morphological similarities of the paired-fin girdles and fin-spines between placoderms and psarolepids [8,15,28], similarities in dermal arrangement between the skulls of the stem-gnathostomes Entelognathus [12,49], Qilinyu [13] and Janusiscus [40] and those of crown-group Osteichthyes, and placoderm-like features noted in the early osteichthyan Lophosteus [50], provides compelling support for a placoderm-like ancestral bauplan for the osteichthyan total-group. When combined with our currently limited knowledge of pre-Devonian osteichthyan diversity and anatomy, this raises the possibility that the putative synapomorphies uniting the psarolepids in this analysis may ultimately prove to represent osteichthyan symplesiomorphies with the benefit of additional fossil data. As such, it remains unclear at present whether the psarolepids constitute a genuine clade.

The Xiaoxiang Fauna as an indicator of an early center of osteichthyan diversification in South China
Although the earliest record of putative gnathostomes may extend as far back as the Ordovician [51,52], until very recently the available fossil record of Silurian gnathostomes was scarce and highly fragmentary [1,2,3,10,16]. Prior to the discoveries of the exceptional Kuanti specimens [8,12,13,15], the only articulated fossil gnathostome dating from the Silurian was the acanthodian-like Yealepis from the Ludlow of Victoria, Australia, based on an incomplete postcranium [53]. Silurian ichthyofaunal assemblages in Eurasia and North America are dominated by 'ostracoderms', a paraphyletic grade of jawless fishes including heterostracans, thelodonts, galeaspids and osteostracans [16,54], with gnathostomes generally being poorly represented, although there has been a substantial increase in fossil evidence in recent decades [54]. Recent discoveries from South China and Vietnam suggest a greater diversity of late Silurian jawed vertebrates than has previously been recognised [54]. For example, Silurian placoderm fossils are rare and only putatively identified in Eurasia and Gondwana, but placoderms were apparently well established in the South China block based on well preserved and unambiguous remains, including antiarchs and arthrodire-like taxa [11,12,13,54]. With regard to bony fishes, the Ludlow Xiaoxiang Fauna displays a far greater diversity of osteichthyans than any other fossil assemblage of comparable age. Outside of the South China block, the pre-Devonian osteichthyan fossil record primarily comprises only two genera based on highly fragmentary remains. Lophosteus, with four described Silurian species and additional Devonian forms, is a widely distributed genus known from the Baltic region, the Timan-Pechora region, Arctic Canada and Australia, ranging in time from the Ludlow to the Lochkovian [50,55,56]. Andreolepis, based on two species, is known from the Baltic region, northern Timan, the Central Urals and the Novaya Zemlya and Severnaya Zemlya archipelagos, with a mid-Ludlow to Early Pridoli time range [33,52,56,57,58]. This contrasts sharply with the diversity of osteichthyans present in the Ludlow Xiaoxiang Fauna, with at least six taxa in the combined assemblage. Among these, from the Kuanti Formation, are the only two articulated Silurian taxa, Guiyu oneiros and Sparalepis tingi, as well as the largest pre-Devonian vertebrate, Megamastax amblyodus (Fig 9). At least two additional Kuanti forms are currently awaiting description based on recently prepared articulated specimens within the IVPP collections.
Of the two scale-based Xiaoxiang taxa, Naxilepis gracilis is present in the Kuanti and the overlying Miaokao Formation while Ligulalepis yunnanensis is restricted to the Miaokao [10]. Pridoli sediments from South China are not as well sampled, but material from the Yulungssu Formation includes an indeterminable osteichthyan [5] and Psarolepis romeri, a species also present in roughly contemporaneous deposits in Vietnam [5,16,21]. The Devonian record of South China, combining the oldest fossil appearances of key groups and containing the most phylogenetically basal fossil taxa have established this region as a possible center of origin for several crown-group sarcopterygian lineages, including anatomically modern coelacanths [59], lungfishes [60] and tetrapodomorphs [61][62][63]. The discovery of diversified stem-sarcopterygians in the Silurian of South China reveals a rich regional history of osteichthyans extending as far back as the early Ludlow, indicative of an as yet largely unknown chapter in early gnathostome evolution well before the advent of the Devonian "Age of Fishes". Field methods and preparation The holotype (IVPP V17915) is permanently housed and accessible for examination in the collections of the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP), Chinese Academy of Sciences, 142 Xizhimenwai Street, Beijing 100044, China. The fossil block was collected from the muddy limestone of the Kuanti Formation (Late Ludlow, Silurian) in Qujing, Yunnan, China and prepared mechanically by IVPP staff using pneumatic air scribes and needles under microscopes. No permits were required for the described study. Nomenclatural acts The electronic edition of this article conforms to the requirements of the amended International Code of Zoological Nomenclature, and hence the new names contained herein are available under that Code from the electronic edition of this article. This published work and the nomenclatural acts it contains have been registered in ZooBank, the online registration system for the ICZN. The ZooBank LSIDs (Life Science Identifiers) can be resolved and the associated information viewed through any standard web browser by appending the LSID to the prefix "http://zoobank.org/". The LSID for this publication is: urn:lsid:zoobank.org:pub:B876FB8A-6A89-4547-9B49-3D9B27C8AF61. The electronic edition of this work was published in a journal with an ISSN, and has been archived and is available from the following digital repositories: PubMed Central, LOCKSS.
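As a brief practical aside, the LSID given above can be turned into a resolvable URL exactly as the text describes, by appending it to the "http://zoobank.org/" prefix. The minimal sketch below assumes Python's standard urllib as the HTTP client and requires network access; any browser or HTTP tool would work the same way.

```python
# Minimal sketch: resolving the ZooBank LSID quoted above by appending it to the
# "http://zoobank.org/" prefix, as described in the text. Using urllib here is an
# illustrative choice only.
from urllib.request import urlopen

lsid = "urn:lsid:zoobank.org:pub:B876FB8A-6A89-4547-9B49-3D9B27C8AF61"
url = "http://zoobank.org/" + lsid      # prefix + LSID, per the ZooBank convention
print(url)

with urlopen(url) as response:          # fetches the registration record's landing page
    page = response.read().decode("utf-8", errors="replace")
print(page[:200])                       # first characters of the returned record
```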
Targeted expression of a conditional oncogene in hematopoietic cells of transgenic mice. We have produced two lines of transgenic mice in which the expression of temperature-sensitive SV-40 large T antigen is targeted to bone marrow megakaryocytes via the platelet factor 4 (PF4) tissue-specific promoter. The progeny of these transgenic mice were observed for about 3 mo, and no malignancies were detected over this period of time. The offspring of these transgenic mice, 6- to 12-wk of age, served as a source of bone marrow cells, which upon in vitro cultivation at the permissive temperature yielded immortalized cell lines (MegT). At the permissive temperature, MegT cells exhibit the characteristics of early 2N and 4N megakaryocytes, which include the presence of specific gene products such as PF4, glycoprotein IIb, acetylcholinesterase, and CD45, as well as the absence of molecular markers of other cell lineages such as the macrophage marker Mac-1, the T helper cell marker CD4, the mast cell marker IgE, the T cell marker CD2, or the erythroid cell marker alpha-globin. The inactivation of the oncogene by a shift of temperature from 34°C to 39.5°C produces a reduction in the frequency of the 2N cells, in conjunction with the appearance of 8N and 16N cells, consisting of 27 and 3% of total cells, respectively. Thus, we have generated hematopoietic cell lines that are trapped in the early stages of megakaryocyte commitment, but able to undergo part of the normal program of terminal differentiation. Pluripotent bone marrow stem cells give rise to 2N megakaryoblasts, which are subsequently converted to mature polyploid megakaryocytes as a result of DNA replication in the absence of mitosis and the development of α-granules, which contain specific gene products (Hawiger et al., 1987). The mature megakaryocytes then fragment into small anucleate platelets which circulate within the blood. Past studies of megakaryocyte biochemistry and differentiation have been hampered because these cells constitute only 0.05-0.1% of the nucleated cells of bone marrow.
The currently available cell lines, all derived from patients with leukemia, exhibit multilineage properties and/or low ploidy states (Martin and Papayannopoulou, 1982; Tabilio et al., 1983; Tabilio et al., 1984; Ogura et al., 1985; Sledge et al., 1986; Greenberg et al., 1988). Therefore, the use of a molecular biological approach to generate megakaryocytic cell lines which maintain lineage fidelity and become polyploid would constitute a significant advance in this field of research. We have used tissue-specific expression of a conditional oncogene in transgenic mice to produce immortalized megakaryocytic cell lines. In prior studies, transient expression experiments as well as transgenic mice were used to define the tissue-specific elements of the platelet factor 4 (PF4) gene and the promoter region which allows selective expression in megakaryocytes and platelets (Ravid et al., 1991a,b). We now describe the generation of transgenic mice expressing the temperature-sensitive SV-40 large T antigen driven by the above tissue-specific regulatory domain (Meg-T mice). This conditional oncogene has previously been linked to retroviral vectors to immortalize in vitro precursors of neural cells (Frederiksen et al., 1988) and myogenic cells (Iujvidin et al., 1990). The offspring of the Meg-T transgenic mice serve as a source of primary bone marrow cells which yield immortalized 2N or 4N megakaryocytes when cultured at the permissive temperature and subsequently transform to higher ploidy states when cultured at the nonpermissive temperature. Thus, our investigations demonstrate that hematopoietic cells can be immortalized at a specific stage of commitment with a lineage-specific promoter linked to a conditional oncogene. Experimental Procedures Plasmid Construction and Generation of Transgenic Mice. (Abbreviations used in this paper: GM-CSF, granulocyte-macrophage colony-stimulating factor; IL, interleukin; IMDM, Iscove's modification of Dulbecco's medium; PF4, platelet factor 4; vWF, von Willebrand factor.) The plasmid PF4SVtsA58 was constructed by using pPF4GH (Ravid et al., 1991b), which contains the rat 1.1-kb PF4 promoter linked to the HGH gene. This construct has a unique EcoRI site at the 3' end of the HGH gene and a unique BanII site 20 bp downstream of the transcriptional start. A unique KpnI site was introduced at the 5' end of the PF4 promoter in pPF4GH, via linkers, to produce pPF4GHr. This latter plasmid was digested with BanII and EcoRI to remove the HGH gene, and a BglII/EcoRI SVtsA58 fragment (Frederiksen et al., 1988) was introduced, via BanII/BglII linkers, to generate pPF4SVtsA58. The orientation of insertion of the SVtsA58 gene was confirmed by DNA sequencing. The PF4 promoter/SVtsA58 gene of 3.6 kb was obtained by cutting pPF4SVtsA58 with KpnI and EcoRI and purifying the appropriate fragment by agarose gel electrophoresis. This fragment was used to produce transgenic mice as described previously (Ravid et al., 1991a). Foster females were of the ICR strain (Harland, Frederick, MD) and the microinjected eggs were of the FVB strain (Taconic Farms, Germantown, NY). Mice were screened for transgene integration by Southern blot analysis of tail DNA (Hogan et al., 1986) and transgene expression was detected by Northern blot analyses (Ravid et al., 1991a). Cultured Cells. Mouse bone marrow cells were isolated and cultured as described previously (Ravid et al., 1991b).
The cells were grown in 5% CO2 at 34°C in a liquid culture under conditions that were shown before to support maturation and ploidy of primary megakaryocytes (Kuter et al., 1989). To this end, the cells were grown in the presence of Iscove's modification of Dulbecco's medium (IMDM) supplemented with penicillin (2,000 U/ml), streptomycin (200 μg/ml), L-glutamine (0.592 mg/ml), horse serum (20%), and the hemopoietic growth factors erythropoietin (1 U/ml), interleukin (IL)-3 (5 ng/ml), IL-6 (5 ng/ml), and granulocyte-macrophage colony-stimulating factor (GM-CSF) (5 ng/ml). The immortal cells derived were cloned by seeding 200-500 cells/10 ml IMDM supplemented as above in a 100-mm-diam culture dish. Individual colonies were isolated with a glass cloning ring, removed from the plate by trypsinization, and grown in a 35-mm dish. To induce cell differentiation, 0.5-1 × 10⁵ cells were seeded into a 25-cm² culture flask in the presence of 5 ml of the above medium and incubated in 5% CO2 at 39.5°C for 4 to 5 d. Cells were counted by hemocytometer, and cell death was followed by staining with Trypan blue. Immunofluorescent Staining. Cell lines were grown on glass cover slides for 3 to 4 d, bone marrow cells were spun onto a polylysine-treated slide in a Cytospin 2 (Shandon, Pittsburgh, PA), and both cell types were subjected to immunofluorescence staining as described previously (Ravid et al., 1993), except that fixation was carried out with 100% methanol for 2 min at room temperature. Flow Cytometric Analysis and Fluorescence-Activated Cell Sorting (FACS). For analysis of DNA content per cell, adhering cell lines were trypsinized and then washed with culture media, whereas bone marrow cells were harvested in CATCH (Ravid et al., 1991b), and both cell types were stained with Hoechst dye (Ravid et al., 1991a). Flow cytometric analysis and FACS were carried out on a FACStar Plus flow cytometer (a registered trademark of Becton Dickinson, San Jose, CA) (Kuter et al., 1989; Ravid et al., 1991a), and DNA histograms were analyzed by a cell cycle analysis program (ModFit; Verity Software House, Inc., Topsham, ME). The ploidy analysis of mouse bone marrow cells, using a rat platelet antibody, was done as described (Kuter et al., 1989). A coefficient of variation of the 2N peak was maintained at 2.2 to 3.0% by alignment of the optical system. Northern Blot Analyses and Polymerase Chain Reaction. Total RNA was prepared from cells grown at 34°C or 39.5°C and subjected to Northern blot analyses as described previously (Ravid et al., 1991a). The different probes used were cDNAs of GPIIb (gift of Dr. M. Poncz), vWF (gift of Dr. D. Lynch), α-globin (gift of Dr. L. Gehrke), PF4 (Doi et al., 1987), and the gene coding for T antigen (Frederiksen et al., 1988). Low levels of messenger RNA were detected by the PCR as described previously (Ravid et al., 1991b). The PCR primers used to detect CD2 (Diamond et al., 1988) were 5'-GGTCGGTGCAGGAGG-3' (sense) and 5'-CGA~CAGGTGTC-3' (antisense), which amplify a 267-bp fragment. Generation of Transgenic Mice Containing the PF4 Promoter Linked to the Temperature-sensitive Mutant of SV-40 Large T Antigen. The construct used to generate transgenic mice contains 1,104 bp of the 5' upstream region of the rat PF4 gene linked to the tsA58 mutant SV-40 large T antigen (PF4SVtsA58) (Fig. 1 a). The oncogene used encodes a mutant gene product which is rapidly degraded by raising the temperature from 34°C to 39°C (Tegtmeyer, 1975a,b).
Given that the mouse body temperature is about 38°C (Kaplan et al., 1983), large T protein should be at very low concentrations or absent under in vivo conditions. The above segment of the PF4 promoter was selected because our previous transgenic studies demonstrated that the critical tissue-specific regulatory domain is located within this region (Ravid et al., 1991a). PF4SVtsA58 was microinjected into pronuclei of fertilized mouse eggs and the injected embryos were implanted into pseudopregnant outbred females. The offspring were screened for transgene integration by Southern blot analyses of tail DNA digested with PstI, using the SVtsA58 gene as a probe (Fig. 1 b). Five founder mice were identified, of 33 mice produced, exhibiting 1 to 10 copies of PF4SVtsA58 as determined by comparison with diluted linearized control DNA. Although not very low, the percentage of transgenic mice obtained is lower than our usual results with other constructs. We do not rule out the possibility that this is related to an effect of SV-40 large T antigen in early-stage embryos. The transgene was integrated into a single chromosomal site in a head-to-tail fashion, as judged by the 2.5- and 1.1-kb DNA fragments obtained after digestion with PstI. The offspring of the founders were tested for T antigen expression in bone marrow cells and other tissues by Northern blot analyses (Fig. 1 c). The results indicated that the bone marrow cells of the offspring of founders 10 and 18 possessed a low level of oncogene mRNA. However, the liver, skeletal muscle, heart, brain, kidney, adrenal, and lung of offspring from the above founders were negative for transgene expression. This observation confirms the tissue-specific nature of the PF4 promoter used. [Fig. 1 c legend: Northern blots of bone marrow RNA from the offspring of founders 18, 17, 14, 10, and 27, as well as of the lung (L), spleen (S), heart (H), brain (B), skeletal muscle (SK), adrenal (A), and liver of the offspring of founder 18; the blots were probed with the T antigen gene fragment, and the bands shown correspond to the T antigen message. Equal loading of RNA was confirmed by ethidium bromide staining of the ribosomal bands 28S (5 kb) and 18S (2.5 kb). The experimental details are provided in Materials and Methods.] Analyses of the Hematopoietic System of Transgenic Mice. We have carefully monitored alterations in the transgenic mouse colony over a period of about three months. No abnormalities were recorded except for the sudden death of founder 10 at the age of 11 wk. Unfortunately, it was not possible to carry out an autopsy on this animal. The platelets and megakaryocytes from the blood and bone marrow of transgenic mice (ages 6-12 wk) were examined by immunohistochemical methods to determine whether large T antigen was present. The results showed that these cell types possessed no detectable level of oncogene protein under in vivo conditions (not shown). Platelets were also counted in whole-blood samples collected from age-matched transgenic and nontransgenic mice. The mean platelet counts, as determined by hemocytometer (Kuter et al., 1989), were 680 ± 102 × 10³/μl (n = 3) in transgenic mice and 730 ± 90 × 10³/μl (n = 3) in non-transgenic mice. The extent of bone marrow megakaryocyte ploidy was also evaluated in two transgenic (offspring of founder 10) and non-transgenic mice.
The above parameter was determined by identifying megakaryocytes by cell surface labeling with an antibody to rat platelets and then estimating the amount of DNA per cell by flow cytometry as described previously (Kuter et al., 1989). A non-transgenic mouse exhibited a megakaryocyte ploidy distribution of 39.2, 17.7, 5.6, and 10.4% for 2N, 4N, 8N, and 16N or higher, respectively, whereas a transgenic mouse possessed a megakaryocyte ploidy distribution of 40.8, 16.8, 4.87, and 10.2% for 2N, 4N, 8N, and 16N or higher, respectively (the results are from a representative experiment). The relatively high percentage of mouse 2N megakaryocytes detected, as compared to that reported in rats and mice (Corash et al., 1987; Kuter et al., 1989), may have been due to low-level nonspecific binding of antibody to nonmegakaryocytic mouse bone marrow cells, which comprise 99.9% of the cells analyzed. Establishment of Bone Marrow Cell Lines. Bone marrow cells were derived from the offspring of all transgenic mice and cultured at the permissive temperature of 34°C in a liquid culture as described in Materials and Methods. Centrifugation was used to replace 50% of the media every 4 d, and resuspended cells were returned to the original dish. After 4 wk of culture, the nonadhering and adhering cells originating from the offspring of all founders, except 10 and 18, died. Within days, colonies of adhering cells derived from the offspring of founders 10 and 18 filled the dishes. These cells were cloned by the ring cloning technique, expanded to obtain stocks, and subsequently frozen. Clone 37 derived from offspring 37 of founder 10 (designated as MegT37) and clones 1 and 8 from founder 18 (designated as MegT1 and MegT8) were further characterized. At 34°C, cells were spindle shaped and adherent and showed no change in morphology up to ~30 passages. At 39.5°C, cells attached with an efficiency of 50-80%, and then gradually detached and rounded up over 4 to 5 d in culture, after which cell death was observed. About 30% of the cells were identified as dead at day six in culture at 39.5°C. It should be pointed out that immunofluorescence staining with an antibody to large T antigen revealed that the oncogene product was degraded only after about 2 d in culture at 39.5°C, after which the cell number did not increase (not shown). The cells MegT37 and MegT1 grew at 34°C with a doubling time of 22 h, while MegT8 grew with a doubling time of 30 h. However, upon prolonged culturing MegT8 displayed a doubling time of 24 h. When the immortal cells were cultured at 37°C, the cells remained adhering to the dish and continued cycling, albeit with a doubling time of 42 h. This latter observation correlates with our finding that at 37°C the cultured cells still possess the oncogene product (not shown). Lineage Properties of the Cell Lines. The identity of the cell lines was established by documenting the presence of lineage-specific markers with immunohistochemical and Northern blot analyses. MegT37 exhibited strong positive staining with an antibody to rat PF4 (Fig. 2, a and b). As controls, all mouse bone marrow cells with the exception of megakaryocytes were negative (Fig. 2, c and d). MegT37 cells also showed weak positive staining with an antibody to CD45 antigen, but were otherwise negative with antibodies to the macrophage-specific antigen Mac-1, the T helper cell-specific antigen CD4, and the mast cell-specific antigen IgE (not shown) (Spangrude et al., 1988). MegT37 also possessed the megakaryocytic marker acetylcholinesterase (Jackson, 1973) (Fig. 2, e and f). Similar results were obtained with the other cell lines (not shown).
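As an aside on the growth rates reported above (doubling times of roughly 22-30 h at 34°C and 42 h at 37°C), the sketch below shows how such a value can be estimated from hemocytometer counts at two time points under an assumption of exponential growth. The cell counts used are hypothetical placeholders, not values from the paper.

```python
# Illustrative sketch: estimating a doubling time like those reported above from
# counts at two time points, assuming exponential growth. The counts below are
# hypothetical numbers chosen only for demonstration.
import math

def doubling_time(n0, n1, elapsed_hours):
    """Doubling time under exponential growth: td = t * ln(2) / ln(N1 / N0)."""
    return elapsed_hours * math.log(2) / math.log(n1 / n0)

n0 = 1.0e5   # cells seeded (hypothetical)
n1 = 4.5e5   # cells counted 48 h later (hypothetical)
print(f"estimated doubling time: {doubling_time(n0, n1, 48.0):.1f} h")   # ~22 h for these numbers
```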
[Fig. 2 legend: Immunofluorescence staining of MegT37 cells cultured at 34°C (A) or of normal mouse bone marrow cells (C) with an antibody to rat PF4 (Doi et al., 1987); phase-contrast photomicrographs of the same MegT37 cells (B) or the same normal mouse bone marrow cells (D); staining for acetylcholinesterase for six to seven hours in the absence (E) or presence (F) of an enzyme inhibitor (0.5 mM diisopropylfluorophosphate), as described in Jackson (1973). Original magnification: ×400 (A-D) or ×100 (E and F). The experimental details are provided in Materials and Methods.] Northern blot analyses were used to detect expression of markers on the cell lines to which antibodies to rodent antigens are not available and to compare the levels of the T antigen message in the different cell lines. A significant amount of the large T antigen message was noticed in all cell lines, with the order MegT1 > MegT37 > MegT8 (Fig. 3). The mRNA for von Willebrand factor (vWF) was not detected in the cells (not shown), nor was the erythroid marker α-globin (Nishioka and Leder, 1979), while the messages for the megakaryocytic markers GPIIb and PF4 were detectable either at 34°C or at 39.5°C (Fig. 3). As also seen in Fig. 3, the level of the PF4 message in all the cell lines cultured at the permissive conditions was similar to the level detected when the cells were cultured at 39.5°C, while the level of the GPIIb message increased by about twofold upon shifting the cells to 39.5°C. The existence of the PF4 and GPIIb messages in the cell lines was also confirmed by amplifying reverse-transcribed mRNA by the polymerase chain reaction (data not shown). This method was also used to exclude the presence of mRNA for the T cell surface marker CD2 (Diamond et al., 1988), to which rodent cDNA was not available to us. The results shown in Figs. 2 and 3 and in the text are summarized in Table I, which lists all the lineage markers tested and indicates their presence or absence in our MegT cell lines. In Vitro Ploidy of the Megakaryocytic Cell Lines. To ascertain whether inactivation of large T antigen would allow MegT37 to become polyploid, the cell line was analyzed for DNA content per cell by flow cytometry as outlined in Materials and Methods. The localization of the 2N peak was confirmed by using mouse bone marrow cells as a control (Fig. 4 a). When grown at 34°C, the majority of MegT37 (95%) exhibited 2N or 4N nuclei, with very few cells possessing 8N nuclei (Fig. 4 b). When grown at 39.5°C, the frequency of MegT37 with 2N nuclei decreased to about 25%, and those with 8N nuclei increased to about 30%, with some cells possessing 16N nuclei (Fig. 4 c). Phase-contrast photomicrographs of the cells grown at 34°C (Fig. 4 d) or at 39.5°C (Fig. 4 e) were taken before flow cytometry analyses. At 34°C, cells were spindle shaped and adherent, while at 39.5°C a large fraction of the cells detached from the plate. All of these non-adhering cells appeared oval or round, and about 70% of them had a diameter larger than that of a 2N cell (>10-15 μm), large nuclei, and multiple nucleoli. To exclude the possibility that cell fusion might have generated MegT37 with nuclei greater than 4N, two pools of cells were separately labeled with the green fluorescence dye PKH2-GL or the red fluorescence dye PKH26-GL before cultivation. The labeled cells were mixed in equal numbers, grown at 34°C or 39.5°C for 4 to 5 d, stained with Hoechst dye, and analyzed by flow cytometry.
We then determined the single and double labeled cells in the total population and within each ploidy class. Within the total cell population, 7-8% of the cells were double labeled at 34°C and 10-11% at 39.5°C. The generation of double-labeled cells is most likely to occur because of cell fusion, but a small contribution from dye transfer cannot be excluded. The percentage of cells undergoing fusion in a cell population should be equivalent to three times the percentage of double-labeled cells (red, green; green, green; red, red). Therefore, cell fusion could generate a maximum of ~30% of the polyploid megakaryocytes. Similar results were obtained when each ploidy class was separately analyzed (data not shown). Based on these analyses, we conclude that at least 70% of the polyploid MegT37 cells must have arisen from 2N megakaryocytes by DNA replication without cell division. Similar ploidy analyses carried out with MegT1 and MegT8 cells revealed a minimal number of ≥8N cells (not shown). The electron microscopic analyses of MegT37 grown at 39.5°C confirmed the presence of a large multilobulated nucleus and a high nuclear-cytoplasmic ratio, which is characteristic of polyploid megakaryocytes. However, typical α-granules were absent, while lysosomes were readily apparent (Fig. 4 f). The large nucleus and lysosomes were not present in cells grown at 34°C (not shown). [Fig. 3 legend: RNA from the MegT cell lines (lanes 2-7, with MegT37 in lanes 6 and 7) cultured at 34°C (lanes 2, 4, and 6) or 39.5°C (lanes 3, 5, and 7) was electrophoresed (1% agarose); equal loading was confirmed by ethidium bromide staining of the ribosomal bands 28S (5 kb) and 18S (2.5 kb) (A); the gel from A was subjected to Northern blot analysis and probed with α-globin cDNA, PF4 cDNA, and GPIIb cDNA, and RNA prepared from cells grown at 34°C was probed with the T antigen gene (Tag) (B); the blots were either subjected to autoradiography or scanned by Betascope (Betagen Co., Waltham, MA). Fig. 4 f legend: cells grown at 39.5°C were subjected to electron microscopic analysis as described before (Ravid et al., 1993); original magnification ×5,538; the arrows point to the different lobes of the large multilobulated nucleus, which occupies a large portion of the cell, resulting in the high nuclear-cytoplasmic ratio characteristic of polyploid megakaryocytes.] Several attempts were made to induce MegT37 to convert to cells with 16N and 32N nuclei at frequencies observed in normal murine bone marrow (Corash et al., 1987; Kuter et al., 1989). These manipulations included addition of hemopoietic growth factors (see Methods) and phorbol ester or dimethylsulfoxide, known to induce differentiation in erythroleukemia cell lines (Greenberg et al., 1988), as well as co-culture of MegT37 (prelabeled with fluorescent dye) with normal mouse bone marrow. No augmentation in the extent of ploidy of MegT37 was observed. It should also be noted that differentiation of the cell line in agar cultures at 39.5°C under colony-forming unit conditions (Chatelain et al., 1983) was unsuccessful because of the poor stability of the agar (1-2%).
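The fusion estimate above can be restated as a short worked calculation: if fused cells carry red/green, green/green, or red/red label combinations with equal likelihood, only one third of fusion events are visible as double-labeled (red plus green) cells, so the total fusion frequency is taken as three times the double-labeled fraction. The sketch below simply re-runs that arithmetic with the percentages quoted in the text; the equal-likelihood assumption is the authors' stated reasoning, restated here for illustration.

```python
# Worked restatement of the fusion estimate described above, using the values in the text.
double_labeled_39C = 0.10                       # ~10-11% double labeled at 39.5 degrees C
max_fusion_fraction = 3 * double_labeled_39C    # red/green is one of three label pairings
min_endoreduplication = 1 - max_fusion_fraction

print(f"maximum fusion-derived polyploid cells: ~{max_fusion_fraction:.0%}")            # ~30%
print(f"minimum arising by DNA replication without division: ~{min_endoreduplication:.0%}")  # ~70%
```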
Discussion. Several different oncogenes encode nuclear proteins which are able to immortalize somatic cells in vitro. In some cases, oncogene expression induces the formation of tumor cells which synthesize many of the major differentiation products of the normal cells from which they are derived (Amsterdam et al., 1988; Garcia et al., 1986; Moura Neto et al., 1986; Muller and Wagner, 1984). In other situations, oncogene-mediated cell immortalization inhibits expression of the terminally differentiated state (Beug et al., 1987; Cherington et al., 1986; Dmitrovsky et al., 1986; Falcone et al., 1985). The availability of conditional oncogenes makes it potentially possible to establish cell lines which cycle under permissive conditions and differentiate under nonpermissive conditions in the presence of appropriate growth factors. Indeed, retroviral vectors driving the expression of conditional oncogenes such as SV-40 large T or myc have been used in vitro to produce cell lines having the potential to differentiate (Frederiksen et al., 1988; Iujvidin et al., 1990; Eilers et al., 1989). Viral and cellular non-conditional oncogenes have also been used in vivo to produce transgenic mice (Leder et al., 1986; Andres et al., 1987; Bender and Pfeifer, 1987; Mahon et al., 1987; Efrat et al., 1988; Harris et al., 1988; Reynolds et al., 1988). In each case, tumors were noted in tissues that are targets for high levels of oncogene expression, which frequently caused the death of the transgenic mice. In the present study, we attempted to generate hematopoietic cell lines that could provide models for lineage programming decisions. The retroviral-mediated introduction of oncogenes into pluripotent mouse hematopoietic cells has previously been reported (Williams et al., 1984). However, we used the PF4 promoter linked to a temperature-sensitive SV-40 large T antigen to target oncogene expression in transgenic mice to a specific hematopoietic cell lineage. The use of this conditional oncogene allowed the large T antigen to be maintained in an inactive state in transgenic mice, which have a body temperature higher than the permissive one. This latter feature permitted the mice to exhibit normal hematopoietic cell function for at least three months and prevented the emergence of hematopoietic tumors during this period of time. However, it is possible that the death of founder 10 was due to the leaky nature of the conditional oncogene in conjunction with slight variations in body temperature. Given that the focus of our work was to generate immortalized megakaryocytes, we did not study potential changes in these mice with aging. The bone marrow cells obtained from the young transgenic mice were cultured at 34°C, which yielded immortalized cell lines expressing megakaryocytic characteristics, e.g., PF4, GPIIb, and acetylcholinesterase. In contrast to bone marrow megakaryocytes, the immortalized cell lines do not contain vWF, contain relatively low levels of PF4 mRNA, and consist mainly of cells with 2N and 4N nuclei. It may be that vWF is expressed in mature megakaryocytes of high ploidy class and thus will not be detected in our cell lines. It is possible that the low extent of PF4 expression is due to competition for trans-acting factors, the latter resulting from the high copy number of the transgene. Alternatively, the low level of PF4 message may be typical of low-ploidy bone marrow megakaryocytes. However, the PF4 protein was definitely detectable by antibody staining. The successful production of immortalized cell lines which exhibit lineage fidelity depends on the ability to induce cell differentiation in vitro. Our cell lines respond to a rise in temperature with inactivation of the oncogene and changes in cell behavior and morphology.
The MegT37 cells initiate and progress along a differentiation pathway including arrest of cellular proliferation, enlargement of cell size, frequent detachment from the plate, an increase in the nuclear/cytosolic ratio, and the appearance of cells with 8N and 16N nuclei. Although a cocktail of hemopoietic growth factors was used, we were unable to stimulate MegT cells to become polyploid to the same extent as normal mouse bone marrow megakaryocytes or to form α-granules. In normal megakaryocyte differentiation, CD34-positive cells initially exhibit platelet-specific membrane glycoproteins and only later develop α-granules and full ploidy (Debili, N., C. Issaad, J. M. Masse, J. Guichard, A. Katz, J. Breton-Gorius, and W. Vainchenker. 1992. Blood. 80:126a). It is possible that the liquid culture system used at the high temperature lacks cellular interactions and/or an unknown growth factor that are critical for the formation of high-ploidy cells and for α-granule formation in cells that have been subjected to immortalization. Thus, MegT cells might undergo normal maturation if transplanted into the bone marrow of irradiated mice. Based upon the above data, we believe that these cell lines constitute a new tool for investigating the biology of early megakaryocytes as well as of polyploid cells. We also believe that our molecular biological approach used to generate megakaryocytic cell lines should assist in future studies on oncogene-mediated immortalization of megakaryocytic cells.
Inhibition Mir-92a Alleviates Oxidative Stress and Apoptosis of Alveolar Epithelial Cells Induced by Lipopolysaccharide Exposure through TLR2/AP-1 Pathway Objective To probe into the role of miR-92a in alleviating oxidative stress and apoptosis of alveolar epithelial cell (AEC) injury induced by lipopolysaccharide (LPS) exposure through the Toll-like receptor (TLR) 2/activator protein-1 (AP-1) pathway. Methods Acute lung injury (ALI) rat model and ALI alveolar epithelial cell model were constructed to inhibit the expression of miR-92a/TLR2/AP-1 in rat and alveolar epithelial cells (AECs), to detect the changes of oxidative stress, inflammatory response, and cell apoptosis in rat lung tissues and AECs, and to measure the changes of wet-dry weight (W/D) ratio in rat lung tissues. Results Both inhibition of miR-92a expression and knockout of TLR2 and AP-1 gene could reduce LPS-induced rat ALI, alleviate pulmonary edema, inhibit oxidative stress and inflammatory response, and reduce apoptosis of lung tissue cells. In addition, the TLR2 and AP-1 levels in the lung tissues of ALI rats were noticed to be suppressed when inhibiting the expression of miR-92a, and the AP-1 level was also decreased after the knockout of TLR2 gene. Further, we verified this relationship in AECs and found that inhibition of miR-92a/TLR2/AP-1 also alleviated LPS-induced AEC injury, reduced cell apoptosis, and inhibited oxidative stress and inflammatory response. What is more, like that in rat lung tissue, the phenomenon also existed in AECs, that is, when the expression of miR-92a was inhibited, the expression of TLR2 and AP-1 was inhibited, and silencing TLR2 can reduce the expression level of AP-1. Conclusion MiR-92a/TLR2/AP-1 is highly expressed in ALI, and its inhibition can improve oxidative stress and inflammatory response and reduce apoptosis of AECs. Introduction Acute lung injury (ALI) is an acute respiratory distress syndrome (ARDS) with clinical characteristics of acute hypoxic respiratory failure and bilateral pulmonary infiltration caused by multiple factors in and out of the lung. Due to the lack of specific treatment, the mortality rate can reach up to 40% [1][2][3]. Alveolar epithelial cells (AECs), as the main sites for gas exchange, are also one of the main components of respiratory barrier [4]. During ALI, AECs are always found damaged more, the generation of surfactant reduced, lung compliance decreased, and gas exchange blocked [5,6]. Short noncoding RNA (miRNAs) are a class of small RNA with a length of about 18-25 nucleotides. By binding to messenger RNA (3 ′ -UTR) and inducing RNA silencing or degradation through miRNA-induced silencing complexes, they negatively regulate protein-coding genes' expression and participate in the regulation of various cellular processes, including inflammatory responses [7,8]. In recent years, multiple literature has reported that miRNAs have a part to play in ALI as markers of acute lung injury and diffuse alveolar injury [9]. A case is that in the study of Song et al. [10], miR-34a could target FoxO3 to inhibit autophagy of alveolar type II epithelial cells in ALI and reduce the damage of lipopolysaccharides (LPS)-induced ALI. MiR-92a has also been found to act on development of inflammatory responses in ALI rats, and its inhibition can reduce the secretion of proinflammatory factors and improve inflammatory responses [11]. 
MiRNAs are important regulators of Toll-like receptor (TLR) signaling; among the TLRs, TLR2 has been reported as one of the targets of miR-92a, an interaction that can alleviate liver fibrosis caused by Schistosoma japonicum [12]. In addition, Lai et al. [13] demonstrated that a TLR-mediated decrease in miR-92a expression can promote the production of inflammatory cytokines in TLR-induced macrophages. What is more, Fei et al. [14] revealed that in ALI, glycyrrhizic acid can block the TLR2 signaling cascade to inhibit the inflammatory response induced by ischemia-reperfusion lung injury. Activator protein-1 (AP-1) is an upstream regulator of interleukin-4 (IL-4), which also participates in the ALI process. Khan et al. [15] showed that Anomalin could inhibit AP-1 to relieve mechanical pain and inhibit leukocyte infiltration in ALI rats, and AP-1 is regulated by TLR2 [16,17]. Based on the preceding research, we hypothesized that a signaling axis such as miR-92a/TLR2/AP-1 may be operative in the occurrence and progression of ALI. Hence, in this study, we analyzed the effect of miR-92a/TLR2/AP-1 on ALI AECs. Rat Source. A total of 100 healthy Sprague-Dawley rats, aged 10 weeks and weighing 250-300 g, were obtained from the Experimental Animal Center of Harbin Medical University. The rats were kept at a room temperature of 20-25°C and a relative humidity of 40%-70%, under a normal 12 h circadian rhythm, with free access to food and water. All animal experiments in the present study were approved by the Animal Care and Use Committee of our hospital and followed the guidelines of the Council for International Organizations of Medical Sciences (CIOMS). ALI Rat Model Construction and Observation Index. All the rats were randomly grouped into a control group (CG), a model group (MG), a miR-92a inhibitor group, a TLR2(-) group, and an AP-1(-) group, with 20 in each group. TLR2 and AP-1 gene knockouts were performed on the rats of the TLR2(-) group and the AP-1(-) group by Sigma, while the CG was left untreated. The MG, miR-92a inhibitor group, TLR2(-) group, and AP-1(-) group were intratracheally infused with 5 mg/kg LPS. Meanwhile, the miR-92a inhibitor group was injected via the tail vein with the miR-92a inhibitor vector (100 mg/kg) pretreated with Lipofectamine 2000; the miR-92a inhibitor vector was designed and synthesized by Sigma. Twenty-four hours later, pentobarbital sodium (45 mg/kg, Sigma) was intraperitoneally injected to anaesthetize and euthanize the rats according to the guidelines for the Care and Use of Laboratory Animals, and the lung tissues were isolated. Then, the W/D ratio, apoptosis rate, oxidative stress, and inflammatory factors in the lung tissues of the rats in the five groups were detected. Detection Methods. W/D Ratio. After both lungs were freed from the bronchi, the left lung was taken as the object of examination. The fresh lung tissue was weighed and then baked to constant weight in an oven at 80°C. The W/D ratio = wet weight/dry weight. Apoptosis Rate. Half of the right lung tissue was cut into pieces and centrifuged at 300 × g for 1 min at 4°C in a centrifuge tube fitted with a 35-μm filter. After removing the supernatant, phosphate buffer solution was added to resuspend the precipitated cells, and the cell concentration was adjusted to 1 × 10⁶ cells/ml. Then, Annexin V-FITC and PI were added successively and incubated in the dark at room temperature for 5 min, followed by detection on a FACSCalibur flow cytometer (BD Biosciences, CA, USA). The experiment was repeated 3 times and the average was taken. The Annexin V-FITC/PI apoptosis detection kit was purchased from Invitrogen, USA, under the article number V35113.
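The two readouts just described reduce to simple calculations, sketched below. The weights and quadrant percentages are hypothetical placeholders; only the W/D formula follows the text, and summing the Annexin V-positive quadrants into a single apoptosis rate is a common convention assumed here rather than a step stated by the authors.

```python
# Minimal sketch of the lung W/D ratio and the Annexin V-FITC/PI apoptosis rate
# described above. All numeric values below are hypothetical placeholders.
def wet_to_dry_ratio(wet_mg, dry_mg):
    """W/D ratio = wet weight / dry weight of the same lung sample."""
    return wet_mg / dry_mg

def apoptosis_rate(early_pct, late_pct):
    """Early apoptotic (Annexin V+/PI-) plus late apoptotic (Annexin V+/PI+) fractions."""
    return early_pct + late_pct

print(f"W/D ratio: {wet_to_dry_ratio(wet_mg=250.0, dry_mg=52.0):.2f}")        # e.g. ~4.81
print(f"apoptosis rate: {apoptosis_rate(early_pct=6.5, late_pct=3.2):.1f}%")  # e.g. 9.7%
```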
Detection of Oxidative Stress and Inflammatory Response Levels. The remaining half of the right lung tissue was placed in a glass homogenizer tube and ground into a pulp. The levels of tumor necrosis factor α (TNF-α), interleukin-6 (IL-6), interleukin-10 (IL-10), malondialdehyde (MDA), and superoxide dismutase (SOD) in lung tissues were detected by ELISA with reference to the kit instructions. The MDA, TNF-α, IL-6, and IL-10 ELISA kits were purchased from Guduo Biotechnology Co., Ltd., Shanghai, China, with the article numbers GD-BN1921, GD-DS1716, GD-DS1726, and GD-DS1731, respectively, and the SOD ELISA kit was obtained from Jingkang Bioengineering Co., Ltd., Shanghai, China, with the article number JLC2390. Cell Grouping and Treatment. Cells were divided into 5 groups, namely the blank group (BG), the LPS group, the inhibitor group, the si-TLR2 group, and the si-AP-1 group. Concerning the treatment of cells in each group, those in the BG were left untreated, those in the inhibitor group were transfected with the miR-92a inhibitor vector, those in the si-TLR2 group were transfected with the si-TLR2 vector, and those in the si-AP-1 group were transfected with the si-AP-1 vector. The si-TLR2 and si-AP-1 vectors were designed and synthesized by Sigma, and the corresponding vectors were transfected by means of the Lipofectamine 2000 kit (Thermo Fisher, China). All groups except the BG were supplemented with 1 g/ml of LPS. After 24 hours of culture, the apoptosis rate and the changes in oxidative stress indicators in the cells were detected, with the same detection methods as above. 2.6. qRT-PCR. A Trizol kit (Invitrogen) was used to isolate total RNA from the tissues/cells. The EasyScript One-Step RT-PCR SuperMix kit was acquired from TransGen Biotechnology Co., Ltd., Beijing, China, and the specific detection steps followed the kit instructions: RNA template, 1 μg; forward GSP (10 μM), 0.4 μL; reverse GSP (10 μM), 0.4 μL; 2× One-Step Reaction Mix, 10 μL; EasyScript One-Step Enzyme Mix, 0.4 μL; and RNase-free water added to a final reaction volume of 20 μL. The reaction conditions were 40°C for 30 min and 94°C for 5 min, followed by 40 cycles of 94°C for 30 s, 60°C for 30 s, and extension at 72°C (2 kb/min), with a final extension at 72°C for 10 min. Three replicate wells were set up in the experiment, and U6 was used as the internal reference for the reaction. The results were analyzed by the 2^-ΔCt method. 2.7. Western Blot. The protein in the cells/tissues was extracted by the repeated freeze-thaw method, and the protein concentration was determined by BCA assay. Next, the protein was adjusted to 4 μg/μL and separated on a 12% SDS-PAGE gel, run at an initial voltage of 90 V and then at an increased voltage of 120 V to move the samples to the appropriate position in the separating gel. Upon completion of electrophoresis, the proteins were transferred to a membrane at a constant 100 V for 100 min, and the membrane was blocked in 5% skim milk for 60 min at 37°C before the immune reaction. The membrane was subsequently incubated overnight at 4°C with primary antibody (1:1000), washed three times with warm PBS the next day, 5 min each, and then incubated with secondary antibody (1:1000) at room temperature for 1 hour.
After that, ECL luminescent reagent was developed and fixed. Quantity One software was employed for statistical analysis of the bands after film scanning, and the protein's relative expression level was equal to the gray value of the bands/the grays value of the internal parameters. BCA protein kit, trypsin, and ECL luminescence kit were all acquired from Thermo Scientific™, with the corresponding article number of 23250, 35055, and 90058. Rabbit anti-TLR2 monoclonal antibody, rabbit anti-AP-1 polyclonal antibody, rabbit anti-bax, goat antirabbit IgG secondary antibody, and bcl-2 monoclonal antibody were obtained from Abcam, USA, under the article numbers of ab209217, ab21981, ab32503, ab185002, and ab6721, respectively. 2.8. Statistical Analysis. SPSS19.0, which was purchase from Chicago, IL, USA, was employed for statistical analysis of the collected data. The measurement data was described in the form of mean ± SD. The comparison between two groups was conducted using Student's t-test, while that among multiple groups was carried out by one-way ANOVA. LSD test was adopted for post hoc test. Two-tailed P < 0:05 was considered statistically significant. Graph-Pad Prism 8.0 (La Jolla, CA) was responsible for picture drawing. Results 3.1. Effects of miR-92a/TLR2/AP-1 on ALI Rats 3.1.1. Analysis of miR-92a/TLR2/AP-1 Level in 5 Groups of Rats. After LPS stimulation, the level of miR-92a/TLR2/AP-1 in the lung tissues in the MG was noticeably higher than that in the CG (P < 0:05). When compared with the MG, the miR-92a in the lung tissue in the miR-92a inhibitor group was decreased (P < 0:05), the TLR2 in the TLR2(-) group was declined (P<0.05), and the AP-1 in the AP-1(-) group was reduced (P < 0:05). Moreover, it was found that after inhibiting the expression of miR-92a, the expression levels of TLR2 and AP-1 in the lung tissues of the miR-92a inhibitor group were also lower than those of the MG (P < 0:05), and the AP-1 in the TLR2(-) group also went down while inhibiting the expression of TLR2 (P < 0:05) (Figure 1). Changes of W/D Ratio in ALI Rats' Lung Tissues. Compared to the CG, the W/D ratio of lung tissues in the MG increased remarkably after LPS stimulation, but decreased in the miR-92a inhibitor group, TLR2(-) group, and AP-1(-) group after inhibiting the expression of miR-92a/TLR2/AP-1. The W/D ratio of lung tissues differed little among miR-92a inhibitor group, TLR2(-) group, and AP-1(-) group (P > 0:05) (Figure 2). Changes of Apoptosis Level in ALI Rats' Lung Tissues. Compared with the CG, LPS stimulation markedly elevated the apoptosis rate and bax and bcl-2 levels in the MG (P < 0:05), while inhibition of miR-92a/TLR2/AP-1 expression resulted in decreased apoptosis rate and bax levels, and increased bcl-2 levels in the miR-92a inhibitor group, TLR2(-) group, and AP-1(-) group than the MG (P < 0:05). The level of apoptosis in the lung tissues did not identify any significant difference among the miR-92a inhibitor group, TLR2(-) group, and AP-1(-) group (P > 0:05) (Figure 3). Changes of Oxidative Stress Levels in ALI Rats' Lung Tissues. For the purpose of evaluating the effect of miR-92a/TLR2/AP-1 on oxidative stress levels in ALI rats' lung tissues, we observed the changes of oxidative stress in the lung tissues of ALI rats. It turned out that after LPS stimulation, the oxidative stress levels in the lung tissues of rats in the MG rose dramatically, SOD level dropped (P < 0:05), and MDA level boosted (P < 0:05). 
However, after inhibiting the expression of miR-92a/TLR2/AP-1, the levels of oxidative stress in ALI rats' lung tissues decreased: the SOD level rose (P < 0.05) and the MDA level declined (P < 0.05). In particular, the inhibitory effect of miR-92a on oxidative stress was the most obvious. Among the miR-92a inhibitor group, the TLR2(-) group, and the AP-1(-) group, the level of SOD in the lung tissues of the miR-92a inhibitor group was the highest (P < 0.05) (Figure 4). 3.3. Effects of miR-92a/TLR2/AP-1 on the Apoptosis Level of LPS-Exposed AECs. After exposure to LPS, the apoptosis level of AECs increased dramatically: the apoptosis rate and the bax and bcl-2 levels in the LPS group were higher than those in the BG (P < 0.05). Inhibition of miR-92a/TLR2/AP-1 led to a decreased apoptosis level of AECs: the apoptosis rate and bax level in the inhibitor group, si-TLR2 group, and si-AP-1 group were lower than those in the LPS group, while the bcl-2 level increased (P < 0.05) (Figure 7). Discussion. ALI is a refractory respiratory dysfunction disease whose pathogenesis is not yet fully understood, so specific prevention and treatment approaches have been lacking, leading to a high mortality rate globally [18]. MiRNAs exert marked effects on cell development, genomic imprinting, and the regulation of cell functions [19]. With the exception of Fu et al. [11], little research in recent years has reported on the part miR-92a plays in ALI. The present study once again verified the effect of miR-92a on ALI rats and further analyzed miR-92a's effects on the survival of AECs exposed to LPS. It was found that miR-92a attenuated the oxidative stress response and apoptosis of LPS-induced AECs through the TLR2/AP-1 signaling axis. In this study, we revealed that both inhibition of miR-92a expression and knockout of the TLR2 and AP-1 genes could reduce LPS-induced rat ALI, alleviate pulmonary edema, inhibit oxidative stress and inflammatory response, and reduce apoptosis of lung tissue cells. In addition, it was noticed that in ALI rats' lung tissues, TLR2 and AP-1 were also suppressed when the expression of miR-92a was inhibited [20], while only Zhao et al. [12] have reported that miR-92a can target the regulation of TLR2 expression. Apart from that, it is well established that AP-1 can be regulated by TLR2. As reported by Wan et al. [21], geranyl diphosphate synthase 1 alleviates ventilator-induced lung injury through the TLR2/4/AP-1 signaling axis. Moreover, the TLR2-JNK-AP-1 pathway also plays a crucial part in pulmonary fibrosis caused by Mycobacterium tuberculosis [22]. This also supports our speculation, and we verified this relationship in AECs. We found that inhibition of miR-92a/TLR2/AP-1 also alleviated LPS-induced AEC injury, reduced cell apoptosis, and inhibited oxidative stress and the inflammatory response. What is more, as in rat lung tissues, the same phenomenon existed in AECs: when miR-92a was inhibited, the expression of TLR2 and AP-1 was also inhibited, and silencing TLR2 could reduce the expression level of AP-1. Therefore, it can be concluded from our study that miR-92a/TLR2/AP-1 is related to the onset and progression of ALI, and its inhibition can effectively improve ALI symptoms and reduce apoptosis of AECs. MiR-92a is closely associated with lung disease and has been shown in many studies to be bound up with lung cancer metastasis. For example, Hsu et al. [23] reported in their study that bone marrow-derived cells could release vesicles coated with miR-92a to promote liver metastasis of lung cancer.
In the study of Borzi et al. [24], miR-92a was found to promote the proliferation of lung bronchial cells and establish an ecological environment for intralung metastasis of lung cancer. However, earlier evidence [25] only revealed that inhibited miR-92a could promote the WNT1inducible signaling pathway protein 1 (WISP1) in lung fibroblasts, a fibrogenic medium, but they did not further analyze fibroblast changes, so miR-92a may inhibit the development of pulmonary fibrosis. This suggests that miR-92a has various roles in different lung diseases, but it is unclear whether miR-92a also has a two-way effect in ALI, which needs to be verified by more studies. Although this study studied the role of miR-92a/TLR2/AP-1 in ALI from animal models in vivo and cell models in vitro, further clinical trials are still needed. To sum up, miR-92a/TLR2/AP-1 is highly expressed in ALI, and its inhibition can improve oxidative stress and inflammatory response and reduce apoptosis of AECs. Data Availability The corresponding data of this manuscript can be available if any researcher required.
Of, for, and by the people: the legal lacuna of synthetic persons Conferring legal personhood on purely synthetic entities is a very real legal possibility, one under consideration presently by the European Union. We show here that such legislative action would be morally unnecessary and legally troublesome. While AI legal personhood may have some emotional or economic appeal, so do many superficially desirable hazards against which the law protects us. We review the utility and history of legal fictions of personhood, discussing salient precedents where such fictions resulted in abuse or incoherence. We conclude that difficulties in holding “electronic persons” accountable when they violate the rights of others outweigh the highly precarious moral interests that AI legal personhood might protect. Introduction Fiction abounds with artificial human-like characters: robots, clones, and bioengineered humanoids. But fiction dwells on artists' conceptions of the human condition, and the contexts in which that condition might or might not be altered. Human-like artefacts are no longer fiction, and humanity is now confronted by the very real legal challenge of a supranational entity considering whether to attribute legal personality to purely synthetic intelligent artefacts. The European Parliament has asked the European Commission to write legislation addressing the forthcoming challenges of artificial intelligence (AI)-a sensible and timely suggestion. Here we address only one aspect of that proposal the recommendation that the legislature should consider: ''creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.'' The language concerning ''electronic persons'' indicates a clear intent to confer on some intelligent artefacts legal-person status, such as is also enjoyed by most humans. In this article, we ask whether a purely synthetic entity could and should be made a legal person. Drawing on the legal and philosophical framework used to evaluate the legal personhood of other non-human entities like corporations, we argue that the case for electronic personhood is weak. Though this article begins with philosophical premises, its orientation is ultimately pragmatic. A legal system by the people exists ultimately to protect the interests of the people. That is to say, the people currently recognized as such. In the absence of some compelling moral necessity, we should consider the likely costs and benefits of any legal change for the people. Welcoming AI to the class of legal persons would be a change. Our purpose here is to identify some costs that that choice would present. We work with a historical concept of legal personhood, as set out in the excellent review of the issue by Solaiman (2017). To summarise Solaiman very briefly, legal personhood extends to the set of entities in any lawfully-regulated society that have rights and obligations under the law. The basic provisions for a legal person are: 1. that it is able to know and execute its rights as a legal agent, and 2. that it is subject to legal sanctions ordinarily applied to humans. Historically, only a relatively small subset of humans 1 would have counted as legal persons. 
Legal personhood has been extended not only to humans, but also to corporations and (in some countries) idols and environmental objects. Creating a legal status of electronic personhood for purely synthetic intelligent entities would require that such entities could fruitfully satisfy Solaiman's second criterion. We argue here that it is far from clear that artefacts could or should be designed so as to acquire this status. We begin our article by demonstrating the timeliness and immediacy of our concern that robots might be made legal persons. Proposals for creating synthetic personhood are already on the table, and there are sufficient legal tools in place to implement them. We advise caution and reflection on the problems that have arisen in the past with novel legal persons. While not always a zero-sum game, sometimes extending the class of legal persons can come at the expense of the interests of those already within it. In the past, creating new legal persons has sometimes lead to asymmetries and corruptions such as entities that are accountable but unfunded, or fully-financed but unaccountable. Ultimately this means weakening the legal protections for humans vis-a-vis synthetic persons. Next we consider whether there are moral benefits to offset these risks or costs, such as achieving necessary moral objectives. We suggest that there is no moral obligation to recognize the legal personhood of AI. We recommend against the extension of legal personhood to robots, because the costs are too great and the moral gains too few. 2 Why concern ourselves with legal personality and AI now? Academics have written for some years about the possibility of attributing legal personality to robots, 2 e.g. Asaro (2007); Koops et al. (2010) and Solaiman (2017). So the idea is not new. It gained considerable currency, however, after the Committee on Legal affairs of the European Parliament on 20 January 2015 established a Working Group for legal questions related to the development of Robotics and Artificial Intelligence. On 27 January 2017, the Committee put forward a Motion for a European Parliament Resolution in respect of robotics and artificial intelligence. On 16 February 2017, this Motion was adopted as the Civil Law Rules on Robotics. Press reports give the impression that the Motion contains ''comprehensive rules for how humans will interact with artificial intelligence and robots'' (Wakefield 2017;Sulmont 2017). It does not. There are no legally binding decisions, and the proposals it makes are not in the form of rules, much less comprehensive ones. As a recommendatory text, the Motion identifies lines for future development, for example to create a registry of ''smart robots,'' to use the United Nations to set regulatory standards, to allocate public money to study the ''social and ethical challenges'' of advanced robotics, and so on. Of particular concern here, the Motion also suggested that European law might someday attribute legal personality to robots. 2 Most authors focus on robots rather than general purpose artificial intelligence (AI), presumably because robots are easier to identify with or feature more sympathetically in fiction. Here we take intelligence to be a process for doing the right thing at the right time, where what is 'right' depends on context; and a robot to be an artefact that perceives and acts in the analogue, physical world, in contrast to software that operates only in the context of a digital artefact. 
The legal lacuna of synthetic persons 275 The Civil Law Rules are cautious and non-committal on the question of whether robots should be legal persons. Nevertheless, calling on the European Commission to consider the place of robotics in the European legal order gives the question unprecedented stature. Paragraph AB, in the introductory recitals of the Motion, says as follows: ''[T]he more autonomous robots are, the less they can be considered simple tools in the hands of other actors (such as the manufacturer, the owner, the user, etc.); ...this, in turn, questions whether the ordinary rules on liability are insufficient or whether it calls for new principles and rules to provide clarity on the legal liability of various actors concerning responsibility for the acts and omissions of robots... '' 3 The paragraph goes on to say: ''[U]ltimately, the autonomy of robots raises the question of their nature in the light of the existing legal categories or whether a new category should be created, with its own specific features and implications.'' 4 As adopted, these recitals do not prescribe a particular future status for ''autonomous robots.'' They are without prejudice to whether European law should attribute legal personality to them. Nevertheless, to identify as ''fundamental'' the question ''whether robots should possess a legal status'' strongly implies that the door is open to that innovation. As part of a list of ''general principles concerning the development of robotics and artificial intelligence for civil use,'' the Motion draws further attention to the possibility of attributing legal personality to robots. The Motion in particular calls on the European Commission ''when carrying out an impact assessment of its future legislative instrument [on robots], to explore, analyse and consider the implications of all possible legal solutions.'' In Paragraph 59, it includes giving robots ''the status of electronic persons'' among ''possible legal solutions.'' Again, the Motion is not a statement of law-in-force. Nor does the Motion espouse a particular solution. It does, however, call on the European Commission to ''explore'' the attribution of legal personality to robots as a possible solution. Invoking the expressions ''electronic persons'' and ''electronic personality,'' it gives the idea a higher profile than ever before. The idea of legal personality and AI accordingly merits particular scrutiny at this time. 3 Legal persons: fictive, divisible, and not necessarily accountable Before we can talk sensibly about legal personality for robots, we need to know what the expression ''legal personality'' means in general. Legal personality is a term of art in legal scholarship and practice. Jurists in multiple countries have set out definitions of it. This one, from the Yale Law Journal in 1928, is serviceable: ''To be a legal person is to be the subject of rights and duties. To confer legal rights or to impose legal duties, therefore, is to confer legal personality...'' (Smith 1928, p. 283). This definition is congruent with Solaiman's characterization of legal personhood, discussed above. Three observations about legal personality, so defined, are pertinent to the question of a possible electronic legal personality. First, legal personality is an artifice. When we say that an actor has legal personality, we mean that a legal system addresses its rules to the actor, both to give the actor rights and to subject it to obligations. 
Legal personality is not necessarily correlated with a metaphysical or ethical notion of personhood. While we should want our legal system to bear the metaphysical and ethical concepts in mind, at different times legal systems have conferred legal personhood on much less and much more than the set of metaphysical or ethical persons. Legal personality results from a legal system's decision to recognize that a particular entity has it. We may thus think of legal personality as a kind of fictional status, which the law may confer when doing so suits its ends. Second, legal personality is an aggregate of legal rights and obligations, and thus it is divisible. Legal people need not possess all the same rights and obligations, even within the same system. A legal system might treat a given actor as a legal person in respect of some rights and some obligations but not in respect of others. It may even be helpful to think of legal personhood as a scalar concept, so that an entity can be more or less of a legal person as it possesses more or fewer rights and obligations. Third, the legal personality of an actor, even if it entails that the actor has extensive rights and obligations, does not necessarily entail the actor's effective engagement with the legal system. Though the actor may be the beneficiary of certain rules that give it rights, or the addressee of others that impose obligations on it, this does not in itself tell us what opportunities the legal system provides to that actor to take advantage of the rules or to other actors to hold it to account for breaches. That is to say, the rights and obligations that a legal person may have as a matter of law may not match those it has as a matter of fact. We now consider in detail how each of these observations about legal personality bears on the possibility of extending legal personhood to robots. Legal personality is a fiction of a given legal system An entity's inherent characteristics do not determine whether it is a legal person. It is true that legal systems are less likely to confer legal personality on inanimate objects, and more likely to confer it on entities that are people in the ethical and The legal lacuna of synthetic persons 277 metaphysical sense. This may be because most legal systems wish to recognize and give effect to the rights and obligations that true people possess. But this rough generalization can be misleading. To determine whether an entity is a legal person, one must look to the approach a given legal system takes toward it. Because of the rough generalization that legal people are in fact people, that the legal rights and obligations correspond to real rights and obligations, it is natural to think of legal personality as a fiction pretending to be something real. When a legal system confers legal rights and obligations on an entity, it has determined to treat that entity as though it were a person in fact. It is a kind of pretense in which legal systems can decide to engage, regardless of whether an entity really is a person (See examples in Solaiman 2017, pp. 3-4). Calling legal personality ''a fiction'' does not mean that it lacks real effects. To the contrary, the purpose of conferring legal personality on an actor is to enable that actor to have certain effects in, and to be affected in certain ways by, the legal system. Every legal system must decide to which entities it will confer legal personhood. Legal systems should make this decision, like any other, with their ultimate objectives in mind. 
The most basic question for a legal system with respect to legal personhood is whether conferring legal personhood on a given entity advances or hinders those objectives. Those objectives may (and, in many cases should) be served by giving legal recognition to the rights and obligations of entities that really are people. In many cases, though, the objectives will not track these metaphysical and ethical truths. Sometimes legal personhood may be denied to real people in order to serve odious ends, like perpetuating privileges for some smaller group of people. Other times, a legal system may grant legal personhood to entities that are not really people because conferring rights upon the entity will protect it or because subjecting the entity to obligations will protect those around it. In this regard, the discourse and practice of recognizing legal personhood fits the kind of structure that philosophers call fictionalism. A domain of discourse is fictionalist if it seeks to represent something other than the literal truth (Eklund 2011). Participants in a fictionalist discourse engage in a sort of pretense (whether wittingly or not) by assuming a stance according to which things said in the discourse, though literally false, refer to real entities and describe real properties of these. Discourse about fictional narratives is one easy example. When someone asks whether Daenerys Targaryen has two or three dragons, they are not asking after some fact in the world. Rather, they mean to ask whether the statement is true within the fiction Game of Thrones. Many modern philosophers think fictionalism offers the best account of some familiar domains of discourse, from math (e.g.Field 1989), to morality (e.g. Joyce 2001), to truth (e.g. Burgess and Burgess 2011). When they argue that these domains of discourse are fictionalist, philosophers take on the burden also of saying why we would go to the effort of earnestly saying things that are literally false. Usually, this involves giving an account of why the discourse is useful-e.g., talk of fictional narrative is fun, talk of numbers allows us to build airplanes, and talk of morality allows us to organize socially. In the legal context, there is a long history of conferring legal personhood on corporations, and recognizing that the discourse surrounding corporate legal personhood is fictional. The United States has perhaps the most thoroughly developed legal discourse on the matter. Under U.S. federal law, the term person is defined to include corporations. 5 Participants in the legal system recognize that the discourse surrounding corporate personhood is fictional. As the U.S. Supreme Court wrote, ''[T]he corporate personality is a fiction, although a fiction intended to be acted upon as though it were a fact...'' 6 Scholars for the most part take such statements at face value (Dewey 1926, pp. 655-73;Laufer 1994, pp. 647, 650). Creating a fictional discourse according to which corporations are people was a useful shorthand for conferring on them the legal rights and obligations possessed by human people within the legal system. These include, for example, the corporate right to bind others through contract and the corporate obligation to satisfy commitments under contract. Without an extensive suite of rights and obligations characteristic of legal personhood, corporations could not be the engines of economic progress they have become. Sometimes legal systems will even confer legal personality on an ad hoc basis to individual entities. 
This happened, for example, with the Bank for International Settlements. In a case involving claims against the Bank, an arbitral tribunal noted that the international instruments that created and empowered the Bank-part of a Convention concluded in 1930 by Germany, Belgium, Great Britain, Italy, Japan and Switzerland-confirmed that the Bank was to be an international law entity. The arrangement was novel, a company limited by shares and, apparently, generally recognized as a person under international law. Some of the participants doubted that this was legally tenable, and so they set up a rather tangled structure to give the Bank a Swiss law status-even as Swiss law was expressly not the Bank's governing law for its most important purposes. 7 The Bank was intended to be an international legal person, and the states participating in the Bank communicated their intention by adopting a treaty. 8 The Bank's personality was confirmed (the tribunal went on to observe) by explicit statements in other international agreements. 9 We are concerned here about possible future cases concerning the legal personality of robots. Some academic writings about robot legal personality address questions of personhood in other than a legal sense, e.g., what does it take to constitute a person in a social, biological or even theological sense (Foerst 2009). Legal personality, however, results from a decision in the legal system to confer legal personality on a given entity. This decision may, but need not, be informed by the status of robots as persons vis-a-vis these non-legal senses. Legal personality is a highly elastic concept. The range of actors on which a system might confer legal 5 Dictionary Act, 1 U.S.C. personality is large, a point understood since at least the 1930s (see Nékám 1938, p. 34). The European Parliament Motion of 27 January 2017 to consider the possibility of conferring legal status on robots, accordingly, is not trivial. Nothing in the character of legal systems as such forecloses the possibility, and there is significant precedent to enable it. Legal personality is divisible Legal personhood is not an all-or-nothing proposition. Since it is made up of legal rights and obligations, entities can have more, fewer, overlapping, or even disjointed sets of these. This is as true of the legal personhood of human beings as it is for nonhuman legal persons. Every legal system has had, and continues to have, some human legal persons with fewer legal rights and different obligations than others. The world-wide struggle for equal rights for women, ethnic and religious minorities, and other disadvantaged groups in many nations bears continuing witness to this fact. The disparity is not always invidious; sensible policy can ground different rights and obligations (in some ways more, in others less) for non-citizens, felons, and children (Asaro 2007, p. 3). As discussed above, legal systems can confer legal personhood on non-human entities. In almost every case, these will have both fewer rights and fewer obligations. Consider the legal personhood that environmental features now have in several countries-the Whanganui river and Te Urewera national park in New Zealand (Rousseau 2016), the Ganges and the Yamuna rivers in India (Safi 2016), and the entire ecosystem in Ecuador. 10 Of necessity, the legal rights and obligations accorded to these environmental features differ from those given by their respective nations to human beings. 
In the case of the Whanganui River, for example, the primary concern was to ensure the rights of the river not to be owned (Calderwood 2016). Corporations in the United States may be the legal persons with the suite of legal rights and obligations most closely approximating those given to human beings. A detailed constitutional jurisprudence has grown around the issue. While the U.S. Supreme Court seems on track to affirm that corporations have nearly every constitutional right and obligation, it has balked in some rare instances, such as the right against self-incrimination at criminal trial. 11 In some cases, courts have had to address the divisibility of legal personhood head-on. The General Assembly, in 1948, asked the International Court of Justice whether the UN had the capacity to bring an international claim against a State. The Court advised in the affirmative. In so advising, the Court drew attention to the varied character of persons in a legal system: ''The subjects of law in any legal system are not necessarily identical in their nature or in the extent of their rights, and their nature depends upon the needs of the community... [T]he [UN] is an international person. That is not the same thing as saying that it is a State, which it certainly is not, or that its legal personality and rights and duties are the same as those of a State... Whereas a State possesses the totality of international rights and duties recognized by international law, the rights and duties of an entity such as the [UN] must depend upon its purposes and functions as specified or implied in its constituent documents and developed in practice.'' (Liang 1949) The Court understood that legal personality is a divisible concept. It is not necessary in any legal system for there to be one uniform and unified status of legal person. The divisibility of legal personhood raises the question of which rights and duties a legal system should confer on a legal person, once it has decided to recognize the legal person as such. We should resolve the issue of the legal personhood of robots at this level, rather than treating legal personhood as an all-or-nothing black box (Koops et al. 2010, p. 556). Edsger Dijkstra has noted, ''A convincing demonstration of correctness being impossible as long as the mechanism is regarded as a black box, our only hope lies in not regarding the mechanism as a black box'' (Dijkstra 1970). A legal system, if it chose to confer legal personality on robots, would need to say specifically which legal rights and obligations went with the designation. If it does not, then the legal system will struggle, as happened with the Bank for International Settlements, to make sense of what it has done. To try to confer ''legal personality,'' without being more specific, is to regard legal personality as a black box. In line with the fictionalist paradigm, and as the ICJ opined with respect to the UN, the legal system should determine the legal rights and obligations of a new legal person by reference to how the legal person relates to the legal system's purposes. The gap between de jure and de facto legal personality Even once a legal system has determined which rights and obligations to confer on a legal person, practical realities may nullify them. Legal rights with no way to enforce them are mere illusion. Standing-the right to appear before particular organs for purposes of presenting a case under a particular rule-is crucial to a legal person seeking to protect its rights in the legal system. 
Standing does not necessarily follow from the existence of an actor's legal personality. An entity, even when its legal personality is not in doubt, must exercise its standing before it can avail itself of relevant procedures (Vollenhoven et al. 1926). When an entity tries to invoke newly conferred rights, challenges to its standing are all the more likely (Shah 2013). Consider the legal right of ''integral respect'' that Ecuador gave to its ecosystem. While the ecosystem may have the right as a matter of law, it clearly lacks the nonlegal capacities it would need to protect the right from encroachment. To effectuate the right, the Ecuadorian constitution gave standing to everyone in Ecuador to bring suits on behalf of the ecosystem. Thus, in 2011, private Ecuadorians successfully sued the Provincial Government of Loja to halt expansion of a roadway that was damaging an important watershed (Greene 2011). The outcome would have been very different if Ecuador had provided no mechanism for protecting nature's legal right of integral respect. Nature cannot protect itself in a court of law. Just as legal rights mean nothing if the legal system elides the standing to protect them, legal obligations mean nothing in the absence of procedure to enforce them. The advisory opinion of the ICJ establishing that the UN has legal personality was in 1948, but this resolved only whether the UN could bring a claim. It said nothing about an obvious correlate: the legal capacity of the UN to bear responsibility and answer for its own breaches. Affirmation that the UN indeed can be responsible for its breaches did come-but over half a century later (Wickremasinghe and Evans 2000, para. 66). Despite the efforts of international lawyers, there is still no reliable procedure for suing an international organization. 12 We could never anticipate ex ante all the ways purely synthetic legal people would interact with other legal persons and with the institutions of the legal system (courts, administrative agencies, legislatures, police, etc.). In its first encounters with the legal system, every rule invoked on a robot's behalf or against it would require novel and controversial developments in law. Courts and other organs would struggle to decide how, if at all, the rules-heretofore addressed to other legal persons-address the robot. Both the robot's standing against other actors and other actors' standing against the robot would be sharply contested. If the topic of electronic personality is to be addressed, as directed in the European Parliament's 27 January 2017 Motion, standing-both of robots and other purely synthetic entities to sue and of others to sue them-is a further matter that would need to be considered. Summary The intricacies described in this section are not just inevitable 'bugs' to be eventually worked out. They are crucial questions that we must answer before introducing novel legal personhood. Concerns about legal accountability, and the way electronic persons might affect accountability, are our main motivation in writing this paper. We now turn to consider the impacts of offering some form of personhood status to robots. consequentialist framework. We should do the same for each of the divisible legal rights and obligations at issue for robot legal personhood. A full treatment of the advisability of conferring legal personhood on robots would step methodically from one legal right or obligation to the next. 
Our primary concern in this paper is to raise a cautionary flag in the face of what seems to be international enthusiasm for extending legal personhood to robots. Elon Musk has recently renewed his apocalyptic predictions about the ''existential risk'' AI poses to human beings (Domonoske 2017). Our concern is somewhat different, and arises internally to legal systems and how purely synthetic legal persons would interact with human legal persons. Robotic legal personhood raises concerns about a sort of abuse within the legal system: while robot legal persons would enjoy a host of rights against human legal persons, it is unclear how corresponding legal obligations could be enforced against them. A crucial step in the analysis will be to specify the purposes of the legal system in relation to which robot legal personhood should be assessed. Legal systems can be presumed to serve many purposes, and any claim as to what those are is sure to be deeply controversial. Cast at a general enough level, though, much of the controversy about the purposes of legal systems should dissipate. To that end, we claim that the basic purposes of human legal systems are: 1. to further the material interests of the legal persons they recognize, and 2. to enforce as legal rights and obligations any sufficiently weighty moral rights and obligations, with the caveat that 3. should equally weighty moral rights of two types of entity conflict, legal systems should give preference to the moral rights held by human beings. We think this statement of purpose reflects the basic material and moral goals of any human legal system, with what we hope will be an uncontroversially light thumb on the scale in favour of human interests. Yes, this is speciesism. But it is a kind of speciesism that allows for deference to the weighty interests of other entities, via the mechanism of human investment in those entities (cf. the Solaiman 2017 discussion of idols). If there is even the faintest shadow of truth to Musk's prediction, a much stronger version of speciesism would be justified vis-a-vis AI. However, the weaker statement here suffices to make our arguments below. Robot legal personhood as a moral imperative If robots have, or were on course to acquire, moral rights, then granting them legal personhood by conferring some legal rights would further the purposes of legal systems. But there is great room for skepticism about, first, the possibility of ever designing robots that would hold moral rights and, second, whether that possibility, were it to exist, should be realised. The very grounds of moral rights are highly uncertain for any kind of entity. Some academics suggest that consciousness could be the litmus test for possessing moral rights. Consciousness itself is a tricky notion, and scholars frequently conflate numerous disjoint concepts that happen to be currently associated with the term conscious (Dennett 2001, 2009). In the worst case, this definition is circular and therefore vacuous, with the definition of the term itself entailing ethical obligation (Bryson 2012). If we could settle on a universal metric for moral patiency, that metric could inform whether and when we should give robots legal personhood. At present, any plausible metric should tell against synthetic legal personhood: there is no widespread acceptance that current robots can consistently satisfy these metrics.
Nevertheless, many consider that AI will progress to the point that it can pass any behaviourally observable metric for human-like consciousness. Note that the commonly-suggested Turing Test-requiring that a person interacting with an entity over a communications device mistakes them for human-is already routinely passed at least for limited periods by AI. If AI became capable of mimicking human intelligence, the tides may shift as academics and laypeople alike will come to identify empathetically with robots. For some-the transhumanists, who see technology as a mechanism to become themselves superhuman, even immortal (Goertzel 2010;Geraci 2010)-the identification will be even more immediate. Some even self-identify as robots already. But there is no guarantee or necessity that AI will be developed in this way. It is far from clear that such an AI system would be desirable, and some scholars have suggested that designing such AI would be immoral (Bryson 2009). There is no inevitable point at which AI systems must replicate their makers in becoming functionally similar to human adults. We can therefore ask the question whether such an effort should be attempted. Two options are that it could, like human cloning, be banned altogether; or that human-like AI development should be limited to small-scale, individual, artisanal work, and in particular not be tenable as legal products or business entities that would require fundamental changes to the law (Bryson 2017). Even if robots were to be constructed on the mass scale and to acquire moral rights, this would not fully settle the question of whether the law should recognize them as legal persons. Legal systems are flexible as to what actors they confer legal personality upon, and they need no evidence of supposed inherent qualities of an actor in order to do so. 13 Similarly, the inherent qualities of non-human entities do not dictate the final word on whether they should be recognized as legal persons. Beforehand, we must also check for potential conflicts between possible legal rights of the non-human entity and those already held to be legal persons. Abuse of legal person status by robots and those that make them As Solaiman (2017) emphasizes, it is important that legal persons have legal obligations as well as legal rights. If robots were recognized as legal persons capable of entering into complex legal relationships with other legal persons, there would inevitably arise situations where the acts of robots would interfere with the rights of humans and other legal persons. Without an obligation to respect the rights of other legal persons, those rights would, at least vis-a-vis robotic actors, be rendered a nullity. The solution may seem clear-impose legal obligations on robots. But legal obligations are meaningless if there is no way to hold robots accountable for them. It is not clear that there is. In seeming recognition of this, the United States Department of Defence has proactively declared in their Law of War Manual 14 that robotic weapons are never responsible legal agents. ''Law of War Obligations of Distinction and Proportionality Apply to Persons Rather Than the Weapons Themselves. The law of war rules on conducting attacks (such as the rules relating to discrimination and proportionality) impose obligations on persons. 
These rules do not impose obligations on the weapons themselves...The law of war does not require weapons to make legal determinations, even if the weapon (e.g., through computers, software, and sensors) may be characterized as capable of making factual determinations, such as whether to fire the weapon or to select and engage a target... Rather, it is persons who must comply with the law of war...[I]n the situation in which a person is using a weapon that selects and engages targets autonomously, that person must refrain from using that weapon where it is expected to result in incidental harm that is excessive in relation to the concrete and direct military advantage expected to be gained...[T]he obligation...may be more significant when the person uses weapon systems with more sophisticated autonomous functions...'' The concern here is not a necessarily conceptual one. Through very careful planning, we may discover mechanisms by which robots could be held accountable for legal obligations imposed on them. But the planning would have to be careful indeed. Without it, there are two kinds of abuse that might arise at the expense of human legal rights-humans using robots to insulate themselves from liability and robots themselves unaccountably violating human legal rights. Robots as liability shields It is to be assumed that if decision makers in the system say that they are ready to consider the possibility of ''electronic personality,'' then human actors will seek to exploit that possibility for selfish ends. There is nothing objectionable in itself about actors pursuing selfish ends through law. A well-balanced legal system, however, considers the impact of changes to the rules on the system as a whole, particularly so far as the legal rights of legal persons are concerned. We take the main case of the abuse of legal personality to be this: natural persons using an artificial person to shield themselves from the consequences of their conduct. Recognition of robot legal personhood could present unscrupulous actors with such ''liability management'' opportunities. The law has a way to address this kind of difficulty: It can look behind the artificial person and reach a real one. Veil-piercing-i.e., going behind the legal form and helping or (more usually) sanctioning the real people behind the form-is well-known in various legal systems (Huang 2012). A U.S.-Great Britain arbitral tribunal in the 1920s put the matter like this: ''When a situation legally so anomalous is presented, recourse must be had to generally recognized principles of justice and fair dealing in order to determine the rights of the individual involved. The same considerations of equity that have repeatedly been invoked by the courts where strict regard to the legal personality of a corporation would lead to inequitable results or to results contrary to legal policy, may be invoked here. In such cases courts have not hesitated to look behind the legal person and consider the human individuals who were the real beneficiaries. '' 15 The situation had been ''anomalous'' because the Cayuga tribe had legal personality as a corporate entity in New York State but not under international law. That is, the law that the tribunal had power to apply did not recognize the tribe as an entity to which that law could be applied. ''[R]ecognized principles of justice and fair dealing'' came to the rescue: The tribunal addressed the individuals comprising the tribe to get around its inability to address the tribe. 
Solutions like this are not available in every case. Lawmakers contemplating legal personhood must consider the matter and provide for it. The arbitrators in the Cayuga case had an express invitation to apply equitable principles, the jurisdictional instrument (a treaty) having stipulated equity to be part of the applicable law. 16 Where equity or a similar principle is not part of the applicable law, a judge or arbitrator well might not be able to ''look behind the legal person.'' In a situation like that, the ''human individuals'' who were meant to answer for injury done remain out of the picture. The Tin Council case provides an illustrative warning. The case involved the International Tin Council, a public international organization constituted by a group of states (broadly an entity like the Bank for International Settlements). The states, using the Council, aimed to corner the world market for tin. When the prospects for success looked solid, the Council contracted debts. But the price of tin collapsed, and the Council went insolvent. When the creditors sought to sue and collect what they could on the debts, they found an empty shell and no procedural recourse. The Tin Council could not be sued in English court, and it would have been useless to sue anyway. The Council's creditors sought compensation from the member states, but this was to no avail either: the creditors' contractual relationship was with the Council, not with those who had called it into being. Apart from the possibility of a diplomatic solution (i.e., the states agreeing ex gratia to replenish the Council or pay the creditors), the creditors had no recourse. 17 A difficulty in the Tin Council case was that the legal relations involved were novel, and so the court's precedents offered no guide for effectuating the creditors' rights: ''None of the authorities cited by the appellants [the creditors] were of any assistance in construing the effect of the grant by Parliament of the legal capacities of a body corporate to an international organization pursuant to a treaty obligation to confer legal personality on that organization.'' 18 Nor did the creditors adduce ''any alleged general principle'' in the English law sources that would have allowed the court to pierce the veil and attach liability to the states that had constituted the Council. 19 As for international law, ''[n]o plausible evidence was produced of the existence of such a rule of international law'' (i.e., a rule holding the constituents of the Council responsible for the Council's debts). 20 In short, unlike the tribunal in the Cayuga claims, the House of Lords found no way to avert ''inequitable results.'' The unusual and novel character of the entity led the court to a dead end. Even when the law does explicitly provide for veil piercing, judges and arbitrators have tended to apply it cautiously and as an exception.
15 Great Britain (for the Cayuga Indians in Canada) v. USA, Tribunal under Special Agreement of 18 August 1910 (Nerinex, President; Pound & Fitzpatrick, Arbitrators), Award, 22 January 1926, (1955) 6 Reports of International Arbitral Awards 173, 179. To similar effect a little later, see Shufeldt Claim (USA/Guatemala), (Sisnett, Arbitrator), Decision, 24 July 1930, (1949) 2 Reports of International Arbitral Awards 1083, 1098. (''International law will not be bound by municipal law or by anything but natural justice, and will look behind the legal person to the real interests involved.'')
16 Ibid.
Easterbrook and Fischel (though defending the economic rationale for veil piercing) memorably described veil piercing as happening ''freakishly''; they likened it to ''lightning... rare, severe, and unprincipled'' (Easterbrook and Fischel 1985). The Tin Council case foreshadows the risk that electronic personality would shield some human actors from accountability for violating the rights of other legal persons, particularly human or corporate. Without some way around that shield, we would surely see robots designed to carry out activities that carry high legal risk for human or corporate legal persons. Though this might benefit the humans behind the robots, it would come at the expense of human legal interests more generally. Robots as themselves unaccountable rights violators Even if the legal system sensibly provided mechanisms for veil piercing in the case of robot legal persons, that solution could only go so far. By design, collective legal persons like corporations and international organisations have legal persons behind them, who might stand to answer for violations of the rights of human legal persons. Advanced robots would not necessarily have further legal persons to instruct or control them. That is to say, there may be no human actor directing the robot after inception. The principal-agent model that veil piercing rests upon would then be hard to apply. Autonomous or semi-autonomous robots interacting with humans will inevitably infringe the legal rights of humans. Giving robots legal rights without counterbalancing legal obligations would only make matters worse. In the conflict between robot and human legal rights, only the former would be answerable to the latter; humans would have no legal recourse. This would not necessarily be a problem, if 1. the other problems of legal personality-like standing and availability of dispute settlement procedures-were solved; and 2. the electronic legal person were solvent or otherwise answerable for rights violations. But it is unclear how to operationalize either of these two steps. In the case of corporate legal persons, humans composing the corporation can manage dispute settlement on behalf of the corporation in which they have an interest. But what we are imagining here is a robot legal person, untethered from an interested human principal. Who will represent the robot in the dispute? With the right AI, the robot might be able to represent itself. But we may encounter this problem well before AI capable of effective court advocacy is developed. Conceivably, the robot could hire its own legal counsel, but this brings us to the second step: robot solvency. It is unclear what it would mean for a robot to hold assets, or how it would acquire them. It is possible that the law could contemplate mechanisms for robots to own property or hold accounts, as it does for corporate legal people. The law could also require the creators of robots to place initial funds in these accounts. But money can flow out of accounts just as easily as it can flow in; once the account is depleted, the robot would effectively be unanswerable for violating human legal rights. When insolvent human legal persons violate others' legal rights, other tools are available to hold them to account-anything from apology to jail time. In the case of robots, these options are unavailable, unsatisfying, and/or ineffective. Good faith efforts, like designing robots in order to avoid infringement of human legal rights, would not solve all the problems either. 
A machine made to endeavour to avoid breaches of legal obligation still would present risks. Any actor in society will encounter frictions and mischances resulting in legal incident. This is an unavoidable feature of the complex legal and social space that proponents of robot legal personhood would have robots enter. Conclusion We have shown that it is completely possible to declare a machine a legal person. The impulse to do so exists both at the individual level with academic proponents, and at the level of international governance with the European Parliament recommending consideration. We have also argued here that conferring legal personality on robots is morally unnecessary and legally troublesome. While it may, either now or in the future, have emotional and economic appeal, so do many superficially desirable hazards against which the law protects us. The basic concern is for protecting human and corporate legal rights against abuse by-or more accurately, by exploiting-robots. Trying to hold an electronic person to account, claimants would experience all the problems that have arisen in the past with novel legal persons. There almost inevitably would arise asymmetries in particular legal systems, situations like that of the investor under investment treaties who can hold a respondent party to account but under the same treaties is not itself accountable. Future claimants, if they were to sue an electronic person, likely would confront the accountable but empty, like the International Tin Council; the fully-financed but unaccountable, like the United Nations; and sui generis arrangements like the Bank for International Settlements that novel legal persons tend to instigate. Perhaps a robot could be likened to a force of nature-a storm or avalanche. But this would not be satisfactory either: Natural forces are not legal persons. They affect our legal relations, but we do not speak of them as having legal relations. The electronic person by contrast, would engage in some or all of the legal relations available under the legal system, and yet, for those with whom it transacts or third parties whom it encounters, it would be difficult to hold to account. We have insurance schemes to address floods and fires. You can sue its owner if a dog bites you. The constituent states of the Tin Council, if the court had been willing to pierce the veil, would have stood exposed to the debts it had accrued. An electronic person by contrast might prove to be a legal black hole, an entity that absorbs a human actor's legal responsibilities and from which no glint of accountability is seen. Unfortunately, there is no question that such a readily-manufacturable legal lacuna would be exploited as a mechanism for avoiding and displacing legal liabilities and obligations. It could be in theory that the benefits justify the costs of introducing purely synthetic persons to a legal system. Both need to be considered with proper care before moving further toward such an innovation. But in summary of our own investigation, we find the idea could easily lead to abuse at the expense of the legal rights of extant legal persons. We currently have a legal system that is, first and foremost, of, for, and by the (human) people. Maintaining the law's coherence and capacity to defend natural persons entails ensuring that purely synthetic intelligent entities never become persons, either in law or fact. 
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Stability data, irregular connections and tropical curves We study a class of meromorphic connections $\nabla(Z)$ on $\mathbb{P}^1$, parametrised by the central charge $Z$ of a stability condition, with values in a Lie algebra of formal vector fields on a torus. Their definition is motivated by the work of Gaiotto, Moore and Neitzke on wall-crossing and three-dimensional field theories. Our main results concern two limits of the families $\nabla(Z)$ as we rescale the central charge $Z \mapsto RZ$. In the $R \to 0$"conformal limit"we recover a version of the connections introduced by Bridgeland and Toledano Laredo (and so the Joyce holomorphic generating functions for enumerative invariants), although with a different construction yielding new explicit formulae. In the $R \to \infty$"large complex structure"limit the connections $\nabla(Z)$ make contact with the Gross-Pandharipande-Siebert approach to wall-crossing based on tropical geometry. Their flat sections display tropical behaviour, and also encode certain tropical/relative Gromov-Witten invariants. 1. Introduction 1.1. Over the last few years a number of different results have appeared that link stability data on a class of graded Lie algebras [KS], Stokes factors for irregular meromorphic connections [J5], [BT1], [GMN1] and tropical/Gromov-Witten invariants [GPS]. The purpose of this paper is to make some progress towards unifying some of these results. Starting from a continuous family of stability data on a certain infinite-dimensional Poisson Lie algebra, we will construct a family of irregular meromorphic connections on P 1 choice of stability data, as well as a precise relation with [BT1], are discussed in Section 5. Following [GMN1], a particular scaling limit of ∇(RZ) as R → 0 should be related with ∇ BT L (Z), and in Section 6 we confirm this expectation, proving that they coincide up to a complex gauge transformation. In Section 7 we will show in a precise sense that flat sections of ∇(RZ) display tropical behaviour in the limit R → ∞, and relate this behaviour to the tropical invariants which play an important role in [GPS]. Part of the material in Sections 4 and 7 originally appeared (in a different form) in the preprint [FS] (the present paper supersedes a large part of that work). Sections 8 and 9 have a more differentialgeometric flavour, and are devoted to a comparison with tt * -type connections. In Section 8 we discuss the main example where the tt * -type connections have been constructed rigorously, relating (1.1) to ∇ GM N (Z) in that case. The subtleties in taking the R → 0 scaling limit in the differential-geometric situation are explained in Section 9. Acknowledgements. We thank Gilberto Bini, Tom Bridgeland, Ben Davison, Tamas Hausel, Alessia Mandini, Luca Migliorini, Tom Sutherland, Szilard Szabo and Michael Wong for helpful comments and discussions. This research was partially supported by ERC Starting Grant 307119, theÉcole Polytechnique Fédéral de Lausanne and the Hausdorff Institute for Mathematics, Bonn (JHRT Mathematical Physics). Background and main results In this section we give a brief outline of the ideas on which this paper builds, starting with the notion of stability data on a graded Lie algebra, and sketching how it is related to meromorphic connections and tropical invariants. At the same time we will also present our main results in an informal way. 2.1. Fix a rank n lattice Γ, and let g = γ∈Γ g γ denote a Γ-graded Lie algebra over Q. 
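Concretely, the grading just fixed is the standard notion of a Γ-graded Lie algebra: a direct-sum decomposition indexed by Γ and compatible with the Lie bracket. Written out (a routine statement of the convention, not a quotation of the paper):
\[
\mathfrak{g} \;=\; \bigoplus_{\gamma \in \Gamma} \mathfrak{g}_{\gamma},
\qquad
[\mathfrak{g}_{\gamma_1}, \mathfrak{g}_{\gamma_2}] \subset \mathfrak{g}_{\gamma_1+\gamma_2}
\quad \text{for all } \gamma_1, \gamma_2 \in \Gamma .
\]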
The space of stability data Stab(g) on g is a complex n-dimensional manifold introduced by M. Kontsevich and Y. Soibelman in [KS]. Its points are given by pairs (Z, a) where Z : Γ → C is a group homomorphism and a : Γ \ {0} → g is a map of sets which preserves the grading (that is, such that a(γ) ∈ g_γ). Additionally one requires that (Z, a) satisfy the support property
||γ|| ≤ C |Z(γ)|   (2.1)
whenever γ ∈ Supp a, that is, a(γ) ≠ 0, for some arbitrary norm on Γ ⊗_Z C and some constant C > 0. In particular the set {Z(γ) : γ ∈ Supp a} ⊂ C is discrete. In algebro-geometric applications, Z is the central charge for a stability condition on a category C with Grothendieck group Γ, and the element a(γ) ∈ g_γ corresponds to a count of Z-semistable objects of class γ. The Lie algebra g is typically infinite-dimensional and the support property is a quantitative analogue of the fact that the central charge of a semistable object should not vanish. In general Stab(g) has an uncountable number of paracompact connected components. 2.2. While the definition of Stab(g) as a set might look a bit arbitrary at first sight, what really matters is the topology with which it was endowed in [KS, Section 2.3], that is, what it means to have a continuous family of stability data on g. This topology is essentially characterised by the following two properties. The first is simply that the projection (Z, a) → Z on Hom(Γ, C) is a local homeomorphism. The second property is formulated in terms of a (pro-nilpotent) Lie group G whose Lie algebra is the completion of g with respect to the grading (see Section 3). Given stability data (Z, a) and a ray ℓ ⊂ C* we define a group element
S_ℓ(Z, a) = exp( Σ_{γ ∈ Γ \ {0} : Z(γ) ∈ ℓ} a(γ) ) ∈ G.   (2.2)
If a family (Z_t, a_t) of stability data on g parametrised by [0, 1] is continuous at t_0, then the inequality (2.1) holds uniformly in a neighborhood of t_0, and for any strictly convex sector V ⊂ C* such that Z(Supp a_{t_0}) ∩ ∂V = ∅ the following group element is constant in this neighborhood:
Π→_{ℓ ⊂ V} S_ℓ(Z_t, a_t),   (2.3)
the product being taken over the rays ℓ ⊂ V in the order given by their arguments. This last property is equivalent to Kontsevich-Soibelman's wall-crossing formula and plays a central role in the theory of Donaldson-Thomas invariants. 2.3. The graded Lie algebra which is most relevant for the present paper is an infinite-dimensional Poisson Lie algebra g introduced in [KS, Section 2.5], which may be thought of as the algebra of functions on an algebraic Poisson torus. It is canonically attached to the lattice Γ, upon the choice of an integral skew pairing. Details will be given in Section 3. Continuous families of stability data on g arise naturally in algebraic geometry and mathematical physics. For example, let C denote the category of finite-dimensional modules over a finite-dimensional C-algebra A (e.g. the path algebra of a quiver without loops). The Grothendieck group K(C) is a lattice of finite rank n, and we can choose Γ = K(C), endowed with the skew-symmetrised Euler form. According to [BT1, J5], this setup leads to a continuous family of stability data on the Lie algebra of derivations of g parametrised by the set of stability conditions on the Abelian category C (which can be identified with H^n, H ⊂ C denoting the upper half-plane). In many examples (e.g. for Dynkin quivers, see Section 5) this family is in fact induced by a continuous family of stability data on g via the adjoint representation. In this case we have the following positivity property: there is a positive cone K_{>0}(C) ⊂ Γ (given by the effective classes in K(C)), such that a(γ) vanishes in the complement of K_{>0}(C).
In other words Γ admits a natural positive basis, such that if a(γ) ≠ 0 then γ is a non-negative linear combination of the basis vectors. Furthermore, the image Z(γ) for all γ ∈ Γ with a(γ) ≠ 0 is contained in H: for a stability condition on C we have Z(K_{>0}(C)) ⊂ H. Similar constructions appear in the physics literature, e.g. in the context of compactifications of certain N = 2 supersymmetric quantum field theories in four dimensions ([GMN1], [GMN2]), as well as in the theory of complex integrable systems [KS, KS2]. An important feature of the stability data which appear in physics is that the images Z(γ) for a(γ) ≠ 0 are never contained in a single half-space, since one requires a(γ) = a(−γ) ("every BPS particle has a CPT conjugate antiparticle"). In the sense of [KS, Definition 2], this corresponds to symmetric stability data. As we will see, we find here one of the crucial differences between the algebro-geometric setup of [BT1] and the one which is more relevant for applications to differential geometry. 2.4. A key idea in the subject (with many closely related variants) is that continuous families of stability data are related to systems of differential equations. In the case of interest for the present work, stability data parameterise meromorphic connections on a holomorphically trivial principal bundle on the unit disc ∆, with a single order 2 pole at the origin. The collection of group elements S_ℓ, indexed by rays ℓ ⊂ C*, defines the generalized monodromy (i.e. Stokes factors [B2]) for these connections. The relevant structure group is Aut_*(ĝ), the group of automorphisms of ĝ as a commutative associative algebra, and each S_ℓ ∈ G is regarded as an element in this group using the adjoint representation. In this setting, the topology of Stab(g) has a very natural interpretation. Let (Z_t, a_t) be a continuous family of stability data on g, and suppose that we can indeed find a suitable family of irregular meromorphic connections ∇(Z_t, a_t) as above, whose generalized monodromy is given by the collection S_ℓ(Z_t), with Stokes rays ℓ_γ(Z_t) = −R_{>0} Z(γ) (for γ ∈ Γ). Then the continuity condition (2.3) precisely says that the family ∇(Z_t, a_t) is isomonodromic: i.e. their generalized monodromy at 0 is constant. A solution to the problem of constructing isomonodromic families of connections parameterised by continuous stability data was found by Bridgeland and Toledano Laredo in [BT1], working in the setup of Abelian categories of modules described above. The main difficulty in solving this problem is that it involves calculating and inverting the monodromy map, which is a highly transcendental object. To explain the result in our setup, we assume that we have a continuous family of stability data (Z′, a) parametrised by some open set U ⊂ Hom(Γ, C), which satisfies the positivity property: Γ admits a basis such that a(γ) ≠ 0 implies that the coefficients of γ are non-negative. Without loss of generality, we assume that the image of the elements of the basis by any Z′ ∈ U lies in the lower half-plane −H. Setting Z = −Z′, we will denote by ∇_BTL(Z) the associated Bridgeland-Toledano Laredo isomonodromic family. This is in fact a family of meromorphic connections on P^1, with a pole of order two at 0 and a simple pole at ∞, of the form
∇_BTL(Z) = d − ( Z/t^2 + f/t ) dt.   (2.4)
Here Z and f ∈ g are regarded as derivations of g (as a commutative associative algebra), acting respectively by Z(e_α) = Z(α) e_α and f(e_α) = [f, e_α], for e_α ∈ g_α. According to the main result of [BT1, p. 6], the residue f only has positive graded components, given explicitly in [BT1]. We should notice that the connections ∇(Z) are naturally framed, but we ignore this technicality at this point. Picking a positive basis γ_i for Γ, the Aut_*(ĝ)-connection ∇(Z) is uniquely determined by a system of local flat sections X_{γ_i}(z) = X(z, Z)(γ_i), given by an expansion (2.8) of the schematic form
X(z; Z) e_α = e_α exp_*( z^{−1} Z(α) + z Z(α) − · · · ),
where exp_* is defined via the standard series for the exponential map and the commutative product on g, and the omitted correction terms are built from the special functions G_T and the weights W_T described next. The G_T(z; Z) are a collection of special z-holomorphic functions with branch cuts, with values in g, indexed by a set of finite trees T (the analogues of the multilogarithms appearing in [BT1]), and the W_T(Z) ∈ Γ ⊗ C are combinatorial weights. We will give explicit formulae for W_T and G_T in Section 4. Following [Du1, CV2, GMN1], the main tool in constructing the X_{γ_i}(z) (and so ∇(Z, R)) is a suitable Aut_*(ĝ)-valued singular integral equation; this replaces the Fourier-Laplace transform and the inversion formula of [BT1], all at once. In the sequel, we will always write ∇(Z, R) for the rescaling ∇(RZ), the extra parameter being a real R > 0; similarly, we will write ∇_GMN(Z, R) for ∇_GMN(RZ). As pointed out in [GMN1, p. 6], there is a candidate scaling limit that seems to take ∇(Z, R) to a connection of the type constructed by Bridgeland and Toledano Laredo. Namely, one lets z = Rt, getting
∇_t(Z, R) = d − ( t^{−2} R^{−1} A^{(−1)}(RZ) + t^{−1} A^{(0)}(RZ) + R A^{(1)}(RZ) ) dt,
and then takes the limit R → 0. As we will explain, however, things are not as simple as that, and one needs to be a little careful when taking the limit. It turns out that if we want to stay in the fixed gauge chosen above, then as R → 0 there are divergencies in the coefficients of ∇_t. To get rid of these divergencies we gauge them away with a sequence of gauge transformations g(R), i.e. holomorphic maps C* → Aut_*(ĝ). Remarkably, it is possible to choose the g(R) to be z-constant, so their action on the A^{(i)} is simply g(R) ∘ A^{(i)} ∘ g(R)^{−1} (the usual composition of endomorphisms). We will give explicit formulae for the transformations g(R), and prove the following result. Theorem 2.1. The limit of g(R) · ∇_t(Z, R) as R → 0 exists, is of the same general form as (2.4), and has the same Stokes data as ∇_BTL(Z). It follows that the limit actually coincides with ∇_BTL(−Z). As we will explain, the last implication follows easily from the uniqueness result proved in [BT2]. The opposite sign is due to different conventions; if ∇_BTL(Z) is defined for Z ∈ H^n then ∇(Z, R) is defined on (−H)^n. 2.8. Our construction confirms an expected relation between the irregular connections of [BT1] and [GMN1], at least when the latter have been rigorously constructed. The common ground for this relation is a diagram of partially defined morphisms relating SF(C), CF(C), g and X, where CF(C) is the Hall algebra of constructible functions of the category C, SF(C) is its stacky version [J2], and X is the Lie algebra of complex-valued smooth vector fields on the torus Γ^∨ ⊗ U(1). The map π is a surjective morphism of associative algebras and I denotes the semi-classical limit of an integration morphism to a non-commutative q-deformation of g (composed with the adjoint representation) (see Section 5). The map Ψ corresponds essentially to a Fourier expansion, and will be explained in the main known example when ∇_GMN(Z, R) is well defined, the so-called Ooguri-Vafa case, in Section 8.
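For reference, the substitution z = Rt used in the scaling limit of Section 2.7 above is elementary. A sketch of the computation, assuming ∇(Z, R) takes the z-coordinate form below (this shape is an inference from the ∇_t expression above, not a quotation of the paper's own display):
\[
\nabla(Z,R) \;=\; d \;-\; \Big( \tfrac{1}{z^{2}}\,A^{(-1)}(RZ) \;+\; \tfrac{1}{z}\,A^{(0)}(RZ) \;+\; A^{(1)}(RZ) \Big)\,dz,
\qquad z = Rt,\quad dz = R\,dt,
\]
\[
\Longrightarrow\qquad
\nabla_t(Z,R) \;=\; d \;-\; \Big( \tfrac{1}{t^{2}}\,R^{-1}A^{(-1)}(RZ) \;+\; \tfrac{1}{t}\,A^{(0)}(RZ) \;+\; R\,A^{(1)}(RZ) \Big)\,dt .
\]
In a fixed gauge the coefficient of t^{-2} dt therefore scales like R^{-1} as R → 0, which is precisely the divergence removed by the z-constant gauge transformations g(R).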
Notice that (2.7) will not quite map to (1.1), because of the issue with Stokes factors for "negative rays" mentioned above: the latter includes only a half of the nontrivial Stokes factors for the former. We will be able to give a version of (1.1) including the single negative Stokes factor in the special Ooguri-Vafa example. In general it is possible to give a version of (1.1) which also includes negative Stokes rays with a nontrivial Stokes factor by working over a different Poisson algebra, such as g [[t]]. The R → 0 scaling limit may also be analysed directly for ∇ GM N (Z, R), and we will do this in Section 9 for the Ooguri-Vafa case. This differential-geometric example displays some interesting features which are hidden for D * ( g)-connections. In particular, one can get rid of divergencies in the R → 0 limit in a different way, by a redefinition of the "energy scale". This limit is different from the one obtained by gauge transformations. In both cases, the limiting connections are not smooth. 2.9. Theorem 2.1 implies that we can recover ∇ BT L from ∇. This alternative construction has some advantages, and carries some more information. Firstly, although the construction of ∇(Z, R) a priori only works for g (rather than some more interesting Ringel-Hall algebra mapping to g), it is arguably more elementary, relying only the basic technique of singular integral equations. On the other hand, as we will see, the construction is entirely based on flat sections X(z) for ∇(Z, R). Our method of constructing ∇(Z) is to show that the limits exist and solve a suitable Riemann-Hilbert factorization problem. Thus we produce a natural system of fundamental solutions X(t) for ∇(Z), for which we give explicit formulae. Moreover the equality between ∇(Z) and ∇ BT L (−Z) can be seen as a weak version of Conjecture 1 in [St]. 2.10. More importantly, the tt * -type connections ∇(Z, R) seem to provide a "geometric enhancement" of ∇ BT L (−Z). To see this, we draw a parallel with the finite-dimensional case [Du1,CV2,Du2]. In that picture, the physics provides a canonical moduli space M of 2-dimensional topological field theories, endowed with the structure of a Frobenius manifold. The main ingredient in this structure is a solution of the WDVV equations, which (according to Dubrovin [Du2]) can be recast as the isomonodromic deformation equations of a connection of the form (cf. (2.4)) for U, V suitable matrices. The Frobenius manifold M corresponds to a particular choice of Stokes data for ∇. The role played by the tt * -connections (2.6) is now to fix a special metric on M : as mentioned earlier, Dubrovin characterizes in [Du1] the tt * -equations of Cecotti-Vafa as isomonodromic deformation equations for (2.6), and the former determine the Zamolodchikov hermitian metric. Informally, the agreement of the Stokes data tells the tt * -connection on which manifold it has to construct the metric. The infinite-dimensional case is, of course, more difficult. Physical considerations in four-dimensional gauge theory led in [GMN1] to a moduli space M, endowed with a structure of complex integrable system over a base B. In [KS] it is conjectured that, for suitable M, a generic choice of base point a ∈ B determines a continuous family of stability data on the poisson Lie algebra g attached to the lattice Γ = H 1 (M a , Z). This family is actually parameterised by an open complex submanifold B * ⊂ B, which according to [BT1] determines an isomonodromic family of connections ∇ BT L (b), for b ∈ B * . 
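For comparison, the finite-dimensional connection of Section 2.10 is usually written in the following standard Dubrovin form (normalisations of U and V vary between references, so this is a sketch rather than a quotation):
\[
\nabla \;=\; d \;-\; \Big( \frac{U}{z^{2}} \;+\; \frac{V}{z} \Big)\,dz ,
\]
with U and V suitable matrices (in Dubrovin's theory, U is typically the operator of multiplication by the Euler vector field and V a skew-symmetric operator). This is the finite-dimensional analogue of (2.4), with U playing the role of the central charge Z and V that of the residue f.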
In known examples [KS, BS, Su], the continuous family B * ⊂ Stab(g) is a half-dimensional complex (Lagrangian) submanifold of the space of stability data. The connection ∇ BT L is, therefore, very much related to the structure of complex integrable system on M. On the other hand, isomonodromic deformations of the tt * -type connection ∇ GM N are conjecturally linked with a complete hyperKähler metric on this space [GMN1], arising from physical considerations. The idea is that flat sections for ∇ GM N should give holomorphic Darboux coordinates for the twistor family of holomorphic symplectic forms on M (with twistor parameter z). Again, the agreement of the Stokes data seems to tell the tt * -connection on which integrable system it has to construct the metric. 2.11. The geometric origin of ∇(Z, R) appears in the opposite, R → ∞ limit, and turns out to be related to rational tropical curves immersed in R 2 . This is not unexpected, by the following very informal argument. Suppose, for simplicity, that Γ has rank 2 and the skew pairing −, − is nondegenerate. According to [GMN1], R −1 should be thought of as proportional to the volume of the fibres of M → B, with respect to the hyperKähler metric g on the complex surface M. The R → ∞ limit should correspond to the limit of very small fibres, which is mirror to a large complex structure limit ( [GW]). Near the large complex structure limit, the complex structure is obtained from a degenerate limit via "instanton corrections", which encode interesting tropical invariants (see e.g. [Gr]). We will show that a toy model of this behaviour holds for ∇(Z, R). This applies to the special local flat sections X(z; Z, R) for ∇(Z, R). For simplicity, we will only examine the model case when Γ ∼ = Z 2 , generated by γ, η with γ, η = κ > 0, with the simplest nontrivial stability data in a chamber U + in the parameter space U . For example when κ = 1 (2.3) reduces to a simple "pentagon identity", but nevertheless the connections ∇(Z, R) are already far from trivial. Recall that the X(z; Z; R) are constructed using the special functions G T (z; Z, R) (the analogues of multilogarithms in the work of Bridgeland-Toledano Laredo). At a generic point z * ∈ C * the G T (z * ; Z, R) have discontinuous jumping behaviour when Z crosses a certain locus in U , while X(z; Z; R) is continuous. This will enable us to compute how the expansion (2.8) changes across the critical locus, and to related this behaviour to tropical curves and invariants. Fully precise statements of the following results, together with the few basic notions from tropical geometry we need, will be given in Section 7. Theorem 2.2. As Z crosses the boundary of U + ⊂ U from the interior, a special function G T (z * ; Z, R) appearing in the expansion (2.8) for the flat section X(z * ; Z, R) is replaced by a linear combination of the form where we sum over a finite set of trees (not necessarily distinct). The terms corresponding to a single-vertex tree in the sum above are uniquely characterised by their asymptotic behaviour as R → ∞. These leading order terms are in bijection with a finite set of weighted graphs C i , which have a natural interpretation as combinatorial types of rational tropical curves immersed in R 2 . Note that each (type of a) curve C i appears with a sign ε(C i ); it turns out that this sign is determined by the residue theorem. 
Each tree T appearing in the expansion for X(z_*; Z, R) in the chamber U_+ defines a pair of unordered partitions deg(T), whose parts are positive integral multiples of the generators γ, η. In the light of Theorem 2.2 it is natural to identify deg(T) with a tropical degree w. Theorem 2.3. The sum of contributions ε(C_i(T)) = ±1 over tropical types C_i, weighted by the coefficients W_T in the expansion (2.8) for flat sections in U_+, equals a tropical invariant N^{trop}(w) enumerating plane rational tropical curves, times a simple combinatorial factor in Γ ⊗ Q. The tropical invariants N^{trop}(w) equal in fact certain relative Gromov-Witten invariants of weighted projective planes, and play a crucial role in [GPS]. When taking the large R limit we use the specific form of ∇(Z, R) (with double poles at 0 and ∞), and we have not been able to find a similar tropical structure underlying the special functions used in [BT1].

3. Continuous families of stability data

We review in this section some examples of continuous families of stability data in a graded Lie algebra g, and recall the definition of Kontsevich-Soibelman's Poisson Lie algebra [KS] relevant for this work. 3.1. Algebraic groups. The basic finite-dimensional example is given by the Lie algebra g of a complex algebraic group G endowed with the choice of an algebraic torus H ⊂ G. The torus H acts on g by the adjoint action, and g splits into weight spaces g = g_0 ⊕ ⊕_{α∈Φ} g_α for some finite set Φ of nonzero elements of the dual of the Lie algebra h of H (the roots of G relative to H), such that H acts on g_α via the torus character e^α. We assume that G is actually defined over Q, so that it makes sense to talk of rational points in g, and that the above splitting is also defined over Q. Then one can choose Γ to be the lattice spanned by Φ in h^*. To construct stability data, we consider a map a such that the elements a(γ), γ ∈ Γ, are zero except possibly when γ ∈ Φ is one of the roots α, in which case a(α) is a rational point in g_α. Let Hom^o(Γ, C) denote the locus of Z ∈ Hom(Γ, C) such that Z(α) ≠ 0 for all roots α. Since there are only finitely many roots, if Z ∈ Hom^o(Γ, C) then (Z, a) satisfies the support property and gives a point in Stab(g). The special case when G = GL(n, C) with its standard maximal torus H of diagonal matrices is discussed in [KS, Section 2.9]. Identifying h^* ≅ C^n with canonical basis e_i, the lattice Γ is the sublattice of Z^n spanned by the roots γ_{ij} = e_i − e_j for i ≠ j. Letting E_{ij} denote the usual basis of g = gl(n, Q) given by elementary matrices, we have g_{γ_{ij}} = ⟨E_{ij}⟩. Since g = g_0 ⊕ ⊕_{i≠j} g_{γ_{ij}}, a rational degree-preserving map a : Γ \ {0} → g is the same as a matrix a_{ij} with rational entries such that a(γ_{ij}) = a_{ij} E_{ij}. We can also identify Hom(Γ, C) with C^n / ⟨(1, · · · , 1)⟩, and so identify Hom^o(Γ, C) with the set of elements represented by [(z_1, · · · , z_n)] ∈ C^n / ⟨(1, · · · , 1)⟩ such that z_i ≠ z_j for i ≠ j. Then a point of Stab(gl(n, Q)) is given by a pair (Z, a) with Z ∈ Hom^o(Γ, C) and a matrix a_{ij} ∈ gl(n, Q). We have Z(γ_{ij}) = z_i − z_j. Symmetric stability data in the sense of [KS, Definition 2] are such that a is preserved by the Cartan involution acting on Γ. In the present case this happens precisely when the matrix a_{ij} is skew-symmetric. 3.2. Wall-crossing for algebraic groups. We examine now formula (2.3) and the topology on Stab(gl(n, Q)).
For fixed stability data (Z, a) = ([z k ], a ij ) we have The set of Z for which all rays are distinct is given by such that for all distinct triples (i, j, k) the complex numbers z i , z j , z k do not lie on the same real line. Formula (2.3) forces the matrix a ij to jump along the locus (the walls) Hom o (Γ, C) \ Hom oo (Γ, C). To illustrate this consider a continuous family of stability data (Z t , a t ) on gl(n, Q) parameterised by [0, 1] and such that only the component z j (t) of Z t is nonconstant. We assume that z j (t 0 ) belongs to the line through z i and z k for some distinct triple (i, j, k). Then the rays ±ℓ γ ij (Z t 0 ), ±ℓ γ ik (Z t 0 ) and ±ℓ γ jk (Z t 0 ) coincide, while we assume that all other rays ℓ γpq (Z t ) are distinct for all times. We can pick indices so that ℓ γ ij (Z t ), ℓ γ ik (Z t ), and ℓ γ jk (Z t ) all lie in a strictly convex sector; in fact for t sufficiently close to t 0 we can pick a very narrow sector which contains no other rays. Then the matrices a pq and a ′ pq on the two sides of the wall are related by while all other commutators vanish, by the Baker-Campbell-Hausdorff formula for gl(n, C) we have Since E pq gives a basis for gl(n, C), we must have This is the only jump in the matrix a ij . 3.3. Kontsevich-Soibelman's Poisson Lie algebra. The graded Lie algebra which is most relevant for the present paper is not gl(n, Q), but an infinitedimensional one introduced by Kontsevich and Soibelman in [KS,Section 2.5]. It may be thought of as the Poisson algebra of functions on an complex affine algebraic torus. Let Γ denote a lattice of finite rank n endowed with an integral, skew-symmetric bilinear form −, − . In the rest of the paper we write g for the infinite-dimensional complex Lie algebra generated by symbols e γ for γ ∈ Γ, with bracket [e γ , e η ] = γ, η e γ+η . We can also define a commutative product * on g simply by e γ * e η = e γ+η . (3. 2) The product * turns g into a Poisson algebra, i.e. the Lie bracket acts as a derivation. These two versions are isomorphic (non-canonically). Also notice that g is really defined over Z, so in particular it makes sense to speak of integral and rational elements of g. To make sense of various objects (such as the group elements S ℓ in (2.2)), we need to work with an amenable subalgebra g 0 of g. The algebra g 0 is the analogue in our setup of the bialgebra in [BT1,4.3]. We briefly recall now its construction following [BT1,4.3]. Once and for all, we fix a strict convex cone with vertex at the origin Γ 0 ⊂ Γ, that is, a non-empty subset closed under addition and multiplication by positive integer numbers which does not contain a straight line. Let g 0 ⊂ g be the Poisson Lie subalgebra generated by the elements: g 0 = e γ : γ ∈ Γ 0 ⊂ g and note that g 0 is graded by the semi-group Γ 0 and has finite-dimensional homogeneous components g k . 3.4. The completed algebra g. We now construct the completion of g 0 . For each k 1, we denote by Γ >k ⊂ Γ 0 the cone generated by elements γ 1 + . . . + γ m for m > k and γ j = 0 for all j = 1, . . . , m. The subspace g >k ⊂ g 0 induced by Γ >k is an ideal. Consider the finite-dimensional nilpotent Lie algebra g k = g 0 /g >k . and the corresponding inverse system . . . → g k → . . . → g 0 = C. By definition, the pro-nilpotent graded Lie algebra given by the completion of g 0 is the limit To simplify the notation we will often write g in place of g 0 . 
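As an aside (our own sanity check, and a record of standard conventions that will be used in 3.5 below; normalisations follow [KS] and could differ from the author's by signs or ordering of factors): the bracket acts by derivations of the commutative product, since
\[
[e_\gamma, e_\eta * e_\mu] \;=\; \langle \gamma, \eta+\mu\rangle\, e_{\gamma+\eta+\mu} \;=\; [e_\gamma, e_\eta]*e_\mu + e_\eta*[e_\gamma, e_\mu],
\]
so (g, *, [\cdot,\cdot]) is indeed a Poisson algebra. Writing \mathrm{Li}_2(x) = \sum_{k\ge 1} x^k/k^2, one computes
\[
[\mathrm{Li}_2(e_\gamma), e_\eta] \;=\; \langle\gamma,\eta\rangle \Big(\textstyle\sum_{k\ge 1}\tfrac{1}{k}\,e_\gamma^{*k}\Big)*e_\eta \;=\; -\langle\gamma,\eta\rangle\, \log_*(1-e_\gamma)*e_\eta,
\]
so the automorphism T_\gamma = \exp(-[\mathrm{Li}_2(e_\gamma),-]) of the completion acts by
\[
T_\gamma(e_\eta) \;=\; e_\eta * (1-e_\gamma)^{*\langle\gamma,\eta\rangle}.
\]
With this convention one checks order by order that, for \langle\gamma,\eta\rangle = 1,
\[
T_\eta \circ T_\gamma \;=\; T_\gamma \circ T_{\gamma+\eta} \circ T_\eta,
\]
which is one form of the pentagon identity referred to in the Example of 3.5 below.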
Similarly, we define a pro-nilpotent Lie group G with Lie algebra g by where G k = exp(g k ) is the nilpotent Lie group with Lie algebra g k . Note that g inherits a natural structure of Poisson Lie algebra. Setting the standard power series for the logarithm and the exponential functions combined with the commutative product * yield well-defined maps exp * : g + → g × , log * : g × → g + given by which are each other's inverse. Throughout the paper we will write Aut * ( g) for the group of automorphisms of g as a commutative algebra (we do not require that these preserve the Lie bracket). We will often forget the notation * in (3.2) and simply write e γ e η for the product, but at some points it will be important to have a special symbol for it to avoid confusion (especially to distinguish between the commutative exponential exp * and the Lie algebra exponential exp). When considering g-valued holomorphic functions, meromorphic connections, Stokes data, etc. we need to be careful, since g is infinite-dimensional. For example, by a g-valued connection ∇ on P 1 we mean an inverse system of connections ∇ k on g k , and a flat section of ∇ is an inverse system of ∇ k -flat sections. This is the notion used to define the Stokes data (2.2). See [BT2] for a general treatment. 3.5. Positive stability data and Stokes factors. For most of the time, we will be interested in stability data (Z, a) on g (in particular, the a(γ) are rational points). A natural compatibility condition of the stability data with the cone Γ 0 ⊂ Γ is given by the following definition, that plays a central role in this paper. Note that for positive stability data all the rays ℓ γ,Z = −R >0 Z(γ) for a(γ) = 0 are contained in a half-space H ′ ⊂ C. In the Lie algebra g it is standard to rewrite (2.3) in a different way using the Poisson structure. Firstly notice that for any γ ∈ Γ we may rewrite by Möbius inversion So for γ ∈ Γ prim , that is, for primitive γ Summing over all k ≥ 1 and using standard dilogarithm notation we find Li 2 (e pγ ). Since g is Poisson, for all γ ∈ Γ 0 (not necessarily primitive) [Li 2 (e α ), −] acts as a commutative algebra derivation on the completion g. Therefore exp (−[Li 2 (e γ ), −]) (the exponential of a derivation) acts as an algebra automorphism T γ of g, preserving the Lie bracket (a Poisson automorphism). It turns out that this action is especially nice: (see [FS] for an explicit computation). Recall that we denote by ℓ γ (Z) the ray −R >0 Z(γ) ⊂ C * , for any given γ ∈ Γ and Z ∈ Hom(Γ, C). Note that these conditions define open dense subsets of Stab(g). The locus where the strongly generic condition does not hold corresponds to the so called walls of marginal stability. We show now how to represent the Stokes factors using the first condition. Given generic stability data (Z, a) and a ray ℓ ⊂ C * , the image of the group element S ℓ in (2.2) via the adjoint representation admits the following expression (3.3) Here of course we write T Ω γ for the automorphism exp(−Ω[Li 2 (e γ ), −]). Note that we do not need to specify an order for the previous product as the genericity condition implies that all the T pγ with Z(γ) ∈ −ℓ commute. Furthermore, if in addition (Z, a) is positive, then S ℓ = 1 unless −ℓ ⊂ Z(Γ 0 ). Finally, we can write (2.3) as an equivalent identity of Poisson automorphisms of g, For this formula we assume that there is a single t 0 ∈ [0, 1] for which the stability data is non-generic. Example. 
A special case of (3.4) which is similar to (3.1) appears in the case of the sublattice Γ 0 generated by elements γ, η with γ, η = 1. Then, (3.4) is equivalent to the "pentagon identity" In the rest of the paper, we will assume that we have a continuous family of stability data (Z, a(Z)) on g, parametrised by some open set U ⊂ Hom(Γ, C), which satisfy the positivity property. For a generic point Z ∈ U , all the rays ℓ γ (Z) with a(γ) = 0 are distinct, so in particular (Z, a(Z)) is strongly generic in the sense of Definition 3.3. Remark 3.4. In the mathematical physics literature, it is standard to require that stability data is symmetric in the sense of [KS,Definition 2]. For g, this is given by the condition a(γ) = a(−γ). We will need to come back to this at several points in our discussion. The connections ∇(Z) from stability data In this section we show that any positive stability data on g defines Stokes factors for an irregular (framed) connection on P 1 of the form (1.1). Given a continuous family of stability data parameterised by an open set U ⊂ Hom(Γ, C), the connection varies isomonodromically with Z ∈ U . 4.1. Irregular connections on P 1 . Let P be the holomorphically trivial, principal Aut * ( g)-bundle on P 1 . By this we mean the inverse limit of the system of holomorphically trivial principal bundles corresponding to the groups Aut * (g k ). By a D * ( g)-valued meromorphic function A in C, we mean an inverse system of meromorphic functions A k : C → D * (g k ). For a choice of local coordinate z in P 1 , we will consider meromorphic connections on P of the form ∇ = d − Adz, given by the inverse limit of a system of meromorphic connections (4.1) Given a local gauge transformation Y : U → Aut * (g), that is, an inverse system of holomorphic maps Y k : U → Aut * (g k ) on an open U ⊂ C, we use the standard notation In the rest of the paper, we focus on connections ∇ with a second order pole at z = 0 and simple dependence on z, of the form where A (j) ∈ D * ( g) are constant in z. We choose the formal type at the origin to be with Z ∈ Hom(Γ, C) (regarded as a derivation). It will be convenient to work with the following notion (see e.g. [B1]). Definition 4.1. A (compatibly) framed connection is a pair (∇, g) given by a connection ∇ as above and an element g ∈ Aut * ( g) such that g −1 · A (−1) = −Z. We introduce now Stokes data for a framed connection (∇, g) following [B1,BT1]. With our choice of formal type (4.3), the Stokes rays for ∇ k are of the form −R >0 Z(α) for α a root of g k relative to Z. We define a Stokes ray of ∇ to be of the form ℓ γ = −R >0 Z(γ) for γ ∈ Γ\{0}, and say that a ray is admissible if is not a Stokes ray. Note that the set of Stokes rays need not be finite. By definition, an admissible ray for ∇ is admissible for each ∇ k . By a fundamental solution X : U → Aut * ( g) for ∇ on an open U ⊂ C, we mean an inverse system of holomorphic flat sections X k : U → Aut * (g k ) for ∇ k , that is, satisfying ∂ z X k = A k X k . The previous formula should be understood as acting on an arbitrary element of g k , where we use standard notation for the composition of maps. Note that a fundamental solution provides a local description of the connection (4.4) Given an admissible ray ℓ ⊂ C * for ∇, define H ℓ ⊂ C to be the open half-plane containing ℓ and with boundary perpendicular to ℓ. Then, there exists a unique fundamental solution X ℓ : H ℓ → Aut * ( g) with prescribed assymptotics X ℓ e −Z/z → g as z → 0 in H ℓ (cf. [BT1,Th. 6.2]). 
This follows from an analogous result for the corresponding system of connections ∇_k (see [B1, Th. 3.1 & Lem. 3.3]). The solution X_ℓ is called the canonical fundamental solution of (∇, g) corresponding to the admissible ray ℓ. Following [BT1], we can use this result to define Stokes factors for (∇, g), which requires a little care since the set of Stokes rays may not be discrete. If ℓ_1 ≠ −ℓ_2 are two admissible rays, ordered so that the closed sector Σ ⊂ C^* swept by clockwise rotation from ℓ_1 to ℓ_2 is convex, there is a unique element S_{ℓ_1,ℓ_2} ∈ Aut_*(ĝ) such that X_{ℓ_2} = X_{ℓ_1} S_{ℓ_1,ℓ_2}. Definition 4.2. (∇, g) admits a Stokes factor S_ℓ ∈ Aut_*(ĝ) along the Stokes ray ℓ if the elements S_{ℓ_1,ℓ_2} tend to S_ℓ as the admissible rays ℓ_1, ℓ_2 tend to ℓ in such a way that ℓ is always contained in the corresponding closed sector. Remark 4.3. As in [BT1, Prop. 6.3], one can verify that ∇ admits a Stokes factor along any Stokes ray. Our goal in this section is to construct an isomonodromic family of connections as above with prescribed Stokes factors. We start with a continuous family of positive elements (Z, a(Z)) of Stab(g) parametrised by an open set U ⊂ Hom(Γ, C). In particular all the rays ℓ_{γ,Z} = −R_{>0} Z(γ) with a(γ) ≠ 0 are contained in a half-space H′. We will prove the following result. Proposition 4.4. Let Z ∈ U correspond to generic stability data. Then, there exists a meromorphic framed connection (∇(Z), g(Z)) on P^1, of the form

∇(Z) = d − ( A^{(-1)}(Z)/z^2 + A^{(0)}(Z)/z + A^{(1)}(Z) ) dz,

such that (∇(Z), g(Z)) has Stokes data given by the rays ℓ_{γ,Z} and factors (3.3). It extends to an isomonodromic family of framed connections on all of U. The proof consists of several steps and will be given in Sections 4.2–4.9 below. We will also show that A^{(-1)} and A^{(1)} are related by a suitable symmetry, so that ∇(Z) only depends on a pair of D_*(ĝ)-valued functions. The result can be extended to the case of symmetric stability data by working with the Lie algebra ĝ[[t]]. In this case, A^{(-1)} and A^{(1)} are related by an involution. 4.2. Riemann-Hilbert factorisation problem. Following ideas of [GMN1], the construction of (∇(Z), g(Z)) from generic (Z, a(Z)) can be turned into a Riemann-Hilbert factorisation problem (RH), that is, the construction of a sectionally holomorphic function with jumps (3.3) across ℓ_{γ,Z}, γ ∈ Γ. More precisely, we seek a family X(Z) with the following properties. (1) X(Z) is holomorphic in the complement of the rays ℓ_{γ,Z} with a(γ) ≠ 0. (2) X(Z)(e_α) extends to a holomorphic function in a neighborhood of ℓ_{α,Z}. (3) Fix a ray ℓ ⊂ H′. For every z_0 ∈ ℓ and α ∈ Γ, denote by X(z_0^+)(e_α) the limit of X(z; Z)(e_α) as z → z_0 in the counterclockwise direction. Similarly let X(z_0^-)(e_α) denote the limit in the clockwise direction. Both limits exist, and they are related by the Stokes factor (3.3) attached to ℓ. (4) There exists g(Z) ∈ Aut_*(ĝ) such that lim_{z→0} X e^{-Z/z} = g(Z) along directions non-tangential to Stokes rays. Remark. Condition (1) means that each element of the inverse system X_k(Z) should be holomorphic in the complement of the finite set of rays ℓ_{γ,Z} for which the class of a(γ) in g_k is nonzero. Similar clarifications apply to conditions (2) and (3). Given a solution of RH, we can construct the framed connection in the obvious way: ∇(Z) is given by formula (4.4) and g(Z) by condition (4). Note that the jump (3.3) across a Stokes ray ℓ_{γ,Z} is independent of z, thus the local expression (4.4) patches over a collection of sectors between Stokes rays to all of C^*. Therefore, it defines a meromorphic connection on P^1 with (possibly) poles at z = 0, ∞.
Provided that the restriction of X(Z) to sectors between Stokes rays admits a suitable analytic continuation, continuity of the family of stability data will assure that (∇(Z), g(Z)) is isomonodromic. In what follows, we will construct explicit solutions X(Z) of RH and prove that the corresponding framed connections fulfill the requirements of Proposition (4.4). 4.3. Integral operator. The basic technique to solve Riemann-Hilbert factorization problems consists of finding fixed points for singular integral operators, involving integration along the jump contour (see e.g. [FIKN]). The general form of the integral operators which are relevant for RH formulated above is that of a Z = Z(Z) acting on suitable End( g)-valued holomorphic functions Y by (cf. [GMN1,Eq. 5.11 , (4.5) summing over all rays ℓ = ℓ γ,Z with γ ∈ Γ >0 . Here L is a holomorphic function with values in Hom(Γ, C) and ρ(z, z ′ ) is a suitable Cauchy-type integration kernel. We take (4.5) as a formal expression for a moment. Recall that the stability data (Z, a(Z)) is equivalent to a family (Z, {Ω(γ, Z)}) and note that This leads to the equivalent expression so, when it is well defined, Z[Y ] automatically preserves the commutative product. Following [GMN1], we will actually choose Remark 4.5. This choice is crucial for the solution of RH to provide a connection of the form (4.2). To find fixed points for the operator Z, we wish to iterate in the integral equation (4.6). Initial points foir this iteration are provided by the following definition. Definition 4.6. A holomorphic function Y : U → Aut * ( g) on an open set U ⊂ C * is admissible if the following conditions hold. (1) The limit lim z→0,∞ Y e −L (z)(e α ) exists in g for all α ∈ Γ 0 , in any direction non-tangential to ℓ γ,Z with a(γ) = 0. (2) If ℓ γ,Z belongs to the boundary of U , then Y (e γ ) extends to a g-valued holomorphic function in a neighborhood of ℓ γ,Z with the same property. Trivially, the holomorphic function X 0 = e L : C → Aut * ( g) is admissible. This will be our choice of initial point for the iteration. To run the iterative process we need the following. 4.4. Basic estimates. The proof follows from elementary estimates that we now establish. For later applications, we introduce a parameter R > 0. The estimates needed for the proof of Lemma 4.7 are obtained setting R = 1. We need to study the integral where c ∈ C * , ψ is a sufficiently small angle, and z / ∈ R <0 e iψ c. We also look at the integral along an arc, c and ε is small enough. We claim that (4.7) converges for sufficiently small |ψ|, and in fact its modulus is bounded above for R large enough (for a constant C depending on the angular distance of z from R <0 e iψ c). This follows from the change of variable z ′ = −e s+iψ c for real s and ψ, reducing (4.7) to The modulus of this is bounded above by where C is a constant depending on the angular distance of z from R <0 e iψ c. For ψ sufficiently small this is in turn bounded by where C is a new (possibly larger) constant, depending also on ψ. We recognize the integral as a Bessel function, and we find that (4.7) converges for |ψ| ≪ 1, and moreover it is actually bounded above by C 2R|c| exp(−2R|c|) for R large enough, as required. Similarly with the same change of variable (4.8) becomes One can check that this vanishes for s → ±∞ (for fixed, sufficiently small ε). Proof of Lemma 4.7. First we show that for admissible Y , α ∈ Γ 0 , z ∈ U the right hand side of (4.6) is a well defined element of g. 
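Two standard ingredients are being used in this step; we record them explicitly for the reader's convenience. First, the integration kernel: we take it to be the GMN-type Cauchy kernel (this normalisation is our assumption, modelled on [GMN1], and enters only through the two properties noted below),
\[
\rho(z,z') \;=\; \frac{1}{4\pi i}\,\frac{z'+z}{z'-z},
\qquad \rho(z,z') = -\rho(z',z), \qquad \lim_{z\to 0}\rho(z,z') = \tfrac{1}{4\pi i}, \quad \lim_{z\to\infty}\rho(z,z') = -\tfrac{1}{4\pi i} .
\]
Second, the Bessel function implicit in the basic estimate: after the change of variable the integral is controlled by the modified Bessel function of the second kind,
\[
K_0(x) \;=\; \int_0^{\infty} e^{-x\cosh s}\, ds \;\sim\; \sqrt{\tfrac{\pi}{2x}}\, e^{-x} \quad (x\to\infty),
\]
which accounts for the exponential decay of order e^{-2R|c|} claimed for (4.7) when R is large.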
Since z lies away from the rays of integration, the function (z ′ ) −1 ρ(z, z ′ ) is holomorphic on each ℓ γ . For fixed k > 0, the projection of Z[Y ](e α ) on g k involves finitely many γ. We claim that all the integrals appearing are convergent. To see this, we first expand log * (1−Y (e γ )) and use the boundary conditions in Definition 4.6 to see that each integral is dominated by the sum of a fixed finite number of integrals of the form for some constant C. By the basic estimates for (4.7), these are all convergent. Then it follows from standard theory that Z[Y ](e α ) is a holomorphic function of z ∈ U (since (z ′ ) −1 ρ(z, z ′ ) is, and by the above convergence). On the other hand it is also holomorphic in a neighborhood of ℓ α,Z since by the genericity property for Z the integral along ℓ γ,Z = ℓ α,Z appears with a factor of γ, α and therefore vanishes. Finally we can use the basic estimate for (4.7) to take the limit in a direction non-tangential to ℓ γ,Z with a(γ) = 0: by the definition of L this is the constant element of g given by The same argument applies to the z → ∞ limit, which completes the proof. 4.5. Application of Plemelj's theorem. The link between the integral operator Z and the Riemann-Hilbert problem follows from standard theory, and relies on a result from elementary complex analysis (Plemelj's theorem). Suppose Y (z) is an admissible function and (Z, a(Z)) is generic. Fix a ray ℓ in our half-space and a point z 0 ∈ ℓ, and denote by Lemma 4.8. Both limits exist, and they are related by We may apply Plemelj's theorem to find where pv denotes a (well defined, convergent) principal value integral. Therefore where the last principal value integral is convergent. Equation (4.9) follows. 4.6. Fixed point. Following [GMN1, App. C], we construct now a solution of the singular integral equation (4.10) by iteration from X 0 = e L , and show that it solves the Riemann-Hilbert factorization problem. Recall that L(z) = z −1 Z + zZ and hence the solution will depend (in a complicated way) on the parameter Z. Set Z (i) = Z • · · · • Z (i times) and consider the sequence for i ≥ 0. We claim that X (i) (z) converges as i → ∞. This follows from an explicit calculation. To calculate X (i) we start by rewriting, for all admissible Y , Remark. The notation DT reflects the way in which the "BPS state counts" Ω are related to Donaldson-Thomas invariants; in the present case it is of course purely formal. So we can rewrite the action of Z as It follows that (4.11) where we sum over ordered partitions k, and we take the product over all unordered collections {γ 1 , . . . , γ l } ⊂ Γ for l the length of k. Let us denote by T a connected rooted tree, decorated by elements γ ∈ Γ (i.e. there is a map from the vertex set T 0 to Γ). Similarly, we write T 1 for the set of edges of T , and we denote by α(v) the decoration at v ∈ T 0 . Introduce a factor where γ T denotes the label of the root of T , and Aut(T ) is the automorphism group of T as a decorated, rooted tree. To each T we also attach a "propagator" G T which is a g-valued holomorphic function in the complement of ℓ γ rays with a(γ) = 0, defined inductively by where {T ′ } denotes the set of (connected, rooted, decorated) trees obtained by removing the root of T (setting G ∅ (z) = 1). By applying (4.11) i − 1 times we obtain where we sum over all collections of trees {T 1 , . . . , T l } as above, with depth at most i. (Notice that W T j G T j (z) ∈ Γ ⊗ g, and we have extended −, − by glinearity). 
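For reference, the classical Sokhotski–Plemelj formula invoked in 4.5 above reads (for a Hölder-continuous density \varphi on an oriented contour \ell and z_0 an interior point of \ell):
\[
\lim_{z\to z_0^{\pm}} \frac{1}{2\pi i}\int_{\ell}\frac{\varphi(z')\,dz'}{z'-z}
\;=\; \pm\tfrac{1}{2}\,\varphi(z_0) \;+\; \frac{1}{2\pi i}\,\mathrm{pv}\!\int_{\ell}\frac{\varphi(z')\,dz'}{z'-z_0},
\]
the two boundary values being taken from the two sides of \ell. In particular the difference of the two limits is \varphi(z_0), which is what produces the jump (4.9) across the ray.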
By the definition of Aut * ( g)-valued holomorphic functions, we see that the sequence X (i) converges for i → ∞ to a limit X = X(Z), which is a solution of (4.10), given explicitly by where we sum over arbitrary (decorated, rooted) trees. This is the analogue of [GMN1] equation (C.26) (see also [N] equation (4.12)). Our discussion so far can be summarised in the following result. Lemma 4.9. The fixed point X(Z) defined by (4.13) solves RH. The automorphism g(Z) = lim z→0 Xe −Z/z of condition (4) is given by where G 0 T defined by Picking k ≥ 0 and projecting into Aut * (g k ), the proof follows from the previous lemmas and the basic estimates for (4.7). The existence of the limit (4.14) follows from our choice of integration kernel: Remark 4.10. We observe that it is possible to allow symmetric stability data (i.e. to allow Ω(γ) = Ω(−γ)) in the construction above by working over the Poisson and we still find a fixed point given by This solves a Riemann-Hilbert problem which is formally identical to the one we described, but with monodromy (Poisson) automorphisms which are compositions In fact we can perform an identical construction over a local complete or Artin ring, for arbitrary Ω (symmetric or not). 4.7. Definition of the framed connection. We define (∇(Z), g(Z)) setting g(Z) as in (4.14) and where X = X(Z) is an in Lemma 4.9. As mentioned earlier, this defines a meromorphic connection on P 1 with (possibly) poles at z = 0, ∞. We show now that it fulfills the requirements of Proposition (4.4). We first prove that ∇(Z) has a double pole at zero and infinity, and therefore is of the form (4.2). Consider the Aut * ( g)-valued map given explicitly by In each finite-dimensional quotient g k , it makes sense to consider a sector Σ between consecutive rays ℓ γ with a(γ) = 0. So in a fixed g k , and inside Σ, we find that Y k is holomorphic, with well-defined limits as z → 0, ∞. Taking now which are elements of Aut * ( g). The existence of these limits follows from the basic estimates for (4.7) and our choice of integration kernel: Consider the meromorphic connection ∇ 0 (Z) on P 1 given by and note that X 0 provides a fundamental solution. Thus, we find an alternative description of ∇(Z) as a gauge transformation of ∇ 0 inside the sector Σ, that is, From the existence of the limits (4.17) we deduce that ) is a compatibly framed connection with order two poles at 0 and ∞, and no other singularities. Remark 4.11. The automorphism g(Z) lies in the subset of Aut * ( g) given by elements of the form e α → e α exp * ( α, x ) for some x ∈ Γ ⊗ g. There is an obvious involution (−) * on this set, induced simply by x → −x. Notice that we have g(Z) = (Y ∞ ) * , that jointly with Y ∞ ·A (1) =Z shows that A (−1) uniquely determines A (1) . In the case of symmetric stability data one can see that the corresponding derivations are related by an involution. 4.8. Formal type and Stokes data. If we expand the kernel ρ(z, z ′ ) as a formal power series in z around z = 0, we can regard Y as a formal gauge transformation (an element of Aut * ( g[[z]])), taking the germ of ∇(Z) at 0 to the germ of ∇ 0 . In other words, the formal type of ∇(Z) at 0 is the type of ∇ 0 . The gauge transformation h(z) acting by is well defined near z = 0 (it has an essential singularity at ∞), and it takes the formal type of ∇ 0 at 0 to d + Z z 2 dz. This proves that the ∇(Z) has the desired formal type, with Stokes rays given by ℓ γ (Z) = −R >0 Z(γ) for γ ∈ Γ. 
For each finite dimensional quotient g k , the restriction X k to a sector Σ as above is a fundamental solution of ∇(Z) k with the right asymptotics as z → 0. These solutions differ by the action of (3.3) along a Stokes ray ℓ. To prove that these automorphisms are actually the Stokes factors of ∇(Z) it is enough to show that a solution given by X k | Σ induces, by analytic continuation, a fundamental solution on a supersector Σ preserving the asymptotics. We will perform a formally identical check later in Section 6.7, so we do not reproduce the argument here. Similarly, we can calculate the Stokes data at ∞. Setting w = 1 z and arguing as before, we see that the formal type of ∇ at ∞ is d +Z w 2 dw, with Stokes rays given by ℓ γ,Z = −R >0Z (γ) for γ ∈ Γ. We claim that the attached Stokes factors are given again by (3.3). This follows again from the argument in 4.8, applied toX(w; Z) = X(w −1 ; Z). We havē Making the change of variable z ′ = 1 w ′ we can rewrite By induction on |T 0 |, this proves that the jump ofX(w, Z) across ℓ γ,Z equals the jump of X(z; Z) across ℓ γ,Z . 4.9. Extension and isomonodromy of ∇(Z). So far we have constructed ∇(Z) under the assumption that (Z, a(Z)) is generic. We wish to allow for a pair of Stokes rays ℓ γ,Z and ℓ η,Z to come together without the assumption γ, η = 0. Fix a strictly convex sector V ⊂ H ′ . Suppose that (Z(t), a(t)) is a continuous family of stability data on g parametrised by [0, 1], such that V contains only two Stokes rays ℓ, ℓ ′ , whose Z t counterclockwise order is ℓ, ℓ ′ for t < t 0 , respectively ℓ ′ , ℓ for t > t 0 . Thus the rays ℓ, ℓ ′ coincide only when t = t 0 . We assume that all other Stokes rays are constant. Le us also write Ω ± for the obvious limits. From its construction, we see that X(t) = X(Z(t)) has finite limits when t → t ± 0 . These limits X(t ± 0 ) are sectionally holomorphic with values in Aut * ( g). Their jumps across all rays ℓ γ,Z(t 0 ) distinct from ℓ, ℓ ′ are the same. The jump of X(t − 0 ) across ℓ = ℓ ′ is given by Similarly the jump of X(t + 0 ) across the same rays is given by As the family (Z(t), a(t)) is continuous by assumption, we have and hence X −1 (t − 0 ) = X(t + 0 ) (this is clear working in any finite dimensional quotient g k ). From their construction using X(Z), we conclude that the connections ∇(Z(t)) are meromorphic and isomonodromic for t ∈ [0, 1]. By applying this argument repeatedly, we can extend ∇(Z) isomonodromically to all of U . 4.10. Explicit formula for ∇(Z). We provide now a more explicit formula for the D * ( g)-valued function that defines the connection ∇(Z). Given an automorphism T ∈ Aut * ( g), it acts on D * ( g) via the adjoint representation, inducing a map Unlike the g-module structure on D * ( g), we stress that this map is not g-linear (it is a formal analogue of pull-back of vector fields by diffeomorphisms on the formal torus Spec g). For T = X −1 , we can now write We pick a basis {γ j } for the lattice Γ, with dual basis ∂ j . Then, we can express in this basis and construct a matrix of "partial derivatives" of X −1 is not the inverse of [∂X] with respect to the commutative product, precisely due to the failure of g-linearity of ∂X). Using this, a direct calculation shows that This formula provides a rigorous analogue of [GMN1,eq. (5.18)]. In [St,Section 2.8], (4.18) is combined with the general asymptotic expansion for X in order to write down an expansion for the A connection. 
For this, we recall that X = Y X_0 and use this to express the expansion of the connection coefficients, where Y B denotes the ĝ-valued matrix given by the action of Y on the components of B, defined by (4.19). 4.11. Single-ray solution. The following basic example will make contact with differential geometry through the tt*-type connections ∇^{GMN}(Z) of [GMN1]. This is the case when Γ ≅ Z^2 is generated by γ, η with ⟨γ, η⟩ = 1, and the stability data are such that Ω(γ) = 1 with all other Ω vanishing. This computation will play an important role in Sections 6 and 8. Using (4.19), one can derive explicit expressions for the connections ∇(Z); here we provide a different argument using the solution of the integral equation. Indeed, it is straightforward to see that Z[X] = X reduces in this case to a single-ray integral equation (which is just (4.13) in this case). On the other hand, rewriting the relevant integral and integrating by parts, using the skew-symmetry of the kernel ρ(z, z′), we find an explicit expression which, expanded in z, yields the claimed formula for ∇(Z). Remark 4.12. Notice that in this special case it is not harder to allow symmetric stability data with Ω(±γ) = 1 and all other Ω vanishing, without the need to pass to ĝ[[t]]. The results are formally the same, with the only difference of summing over all nonzero k in the formulae for A^{(i)}. We will denote the resulting connections by ∇^{sym}(Z). 4.12. Relation to heat kernel. We close this section by taking a brief look at the iterative solution of (4.10) from a slightly different point of view (this subsection can be safely skipped). We introduce the scaling Z → RZ, which will play a crucial role in the rest of the paper. Denote by K_{m,l}(|x − y|) the usual Euclidean heat kernel in dimension 2 with mass m (the reason for writing |m| in its definition is that we want to allow the "mass" to be a complex number). Let us also introduce the rational function ρ which will serve as an integration kernel. Then the simplest integrals that appear in the solution X(z) (attached to graphs which consist of a single vertex) are very similar to the Euclidean propagator in two dimensions in its so-called parametric representation, but with an extra factor −iρ inserted. To describe further contributions to X(z) one considers graphs T which are rooted trees decorated by α_i ∈ Γ, as in the example of Figure 1. To each internal edge α_i → α_j is attached the usual Euclidean propagator, deformed by an extra factor ρ_{Z_{α_j}, s_j}(s_i).

[Figure 1: an example of a rooted tree decorated by elements α_i ∈ Γ.]

The root of T is responsible for the z-dependence, by contributing a z-dependent factor. Thus the element of ĝ attached to a diagram T with n vertices is given by (4.20). To make some contact with more familiar notions, notice that if we suppose all the "masses" m = |Z_{α_i}| coincide and we forget all the ρ insertions, then setting R = |x − y|/2 we recover the integrand for a Feynman diagram in 2-dimensional φ^n theory with n edges, displayed in Figure 2.

[Figure 2: the corresponding diagram, with labels I_{0,n} and C(x, y).]

As usual, each diagram T also comes with a corresponding rational weight W_T, which in the present case takes values in Γ ⊗ Q. Notice that this is the only place where the locally constant function Ω actually shows up in the solution.

5. The connections ∇^{BTL}(Z) from stability data

In this section we recall the construction of the Bridgeland-Toledano-Laredo (BTL) connections, for our choice of stability data in the Poisson Lie algebra g, and study the relation with [BT1].
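Before turning to the BTL side, and purely for the reader's convenience, we record the textbook identities behind the "parametric representation" used in 4.12 above (stated under the assumption that K_{m,l} is the massive Euclidean propagator in two dimensions; this is our reading of the notation):
\[
\int \frac{d^2p}{(2\pi)^2}\, \frac{e^{ip\cdot(x-y)}}{p^2+m^2}
\;=\; \int_0^{\infty} \frac{ds}{4\pi s}\, e^{-s\,m^2 - |x-y|^2/4s}
\;=\; \frac{1}{2\pi}\, K_0\big(m\,|x-y|\big), \qquad m>0 .
\]
The middle (Schwinger) integral is the parametric representation referred to in the text, and the last equality shows why the same Bessel function K_0 appearing in the estimates of 4.4 reappears here; writing |m| in place of m is what allows the "mass" to be taken complex.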
Then, for the category of representations of a quiver without oriented cycles, we construct a "motivic" isomonodromic family of irregular connections, which recovers the connections in [BT1] using natural Lie algebra morphisms [J2]. For the case of Dynkin quivers, we are also able to recover (2.4) following ideas in [R1]. We use the foundational work of Joyce [J2, J3, J4]. 5.1. Irregular B-connections on P^1. Let Hom(Γ, C^*) be the group of characters of Γ, acting on ĝ by ψ · e_γ = ψ(γ) e_γ for any ψ ∈ Hom(Γ, C^*). This action preserves the Poisson Lie algebra structure and induces an action on the pro-nilpotent Lie group G. We define the pro-solvable, pro-algebraic group B with maximal torus Hom(Γ, C^*) as the semi-direct product B = Hom(Γ, C^*) ⋉ G. The Lie algebra of B is given by the extension b = Hom(Γ, C) ⋉ ĝ of ĝ by the abelian Lie algebra Hom(Γ, C), with bracket [Z, e_γ] = Z(γ) e_γ. Let P_B be the holomorphically trivial, principal B-bundle on P^1. By this we mean the inverse limit of the system of holomorphically trivial principal bundles corresponding to the groups Hom(Γ, C^*) ⋉ G_k. For Z ∈ Hom(Γ, C) and f ∈ ĝ, consider connections of the form

∇ = d − ( Z/t^2 + f/t ) dt,   (5.1)

with a second order pole at t = 0 and a logarithmic pole at t = ∞. Any such connection is the inverse limit of a system of connections (5.2), ∇_k = d − ( Z/t^2 + f_k/t ) dt, where f_k denotes the projection of f in g_k. We note that any connection of the form (5.1) induces a D_*(ĝ)-valued connection on P^1 using the adjoint representation. We will go back to this when we relate the BTL isomonodromic family to the original construction in [BT1], and also to our family of connections ∇(Z) in Section 6. 5.2. The Bridgeland-Toledano-Laredo Theorem. The methods of [BT1] apply in our context, leading to an existence result for connections of the form (5.1) with prescribed Stokes factors. Let (Z′, a(Z′)) be a continuous family in Stab(g) parametrised by an open set U ⊂ Hom(Γ, C). We denote by U_reg ⊂ U the open subset given by elements Z′ corresponding to strongly generic stability data (see Definition 3.3). Without loss of generality, we assume that the image of Γ_0 by any Z′ ∈ U lies in the lower half-plane −H. Theorem 5.1 ([BT1]). For any Z′ ∈ U_reg, setting Z = −Z′, there exists a unique connection ∇^{BTL}(Z) of the form (5.1) with Stokes factors S_ℓ, given by (2.2), along the set of Stokes rays ℓ_γ(Z′) = R_{>0} Z(γ), with γ ∈ Γ. The residue f(Z) has only positively graded components f(Z) = ∑_{γ∈Γ_0} f_γ, explicitly given by (5.4) below, where J_n : (C^*)^n → C are suitable holomorphic functions with branch cuts and ⊗ denotes the product in the universal enveloping algebra Uĝ. Furthermore, as Z′ varies in U, the connections ∇^{BTL}(Z) extend to an isomonodromic family of connections with holomorphic dependence on Z. Following ideas of Balser-Jurkat-Lutz [BJL], the proof follows from the application of the Fourier-Laplace transform to a connection of the form (5.2), and the study of the analytic continuation of solutions of the corresponding Fuchsian connection. Using the well-known fact that the monodromy of such connections can be expressed in terms of multilogarithms, Tannaka duality leads to a formula for the Stokes map [BT2, Thm. 4.7]. Recall that this map sends the irregular connection (5.1), determined by Z ∈ Hom(Γ, C) and f ∈ ĝ, to its Stokes data. For fixed Z, it can be seen as a map S on the residue f, where the Stokes factors are S_ℓ = exp(∑_{α : Z(α)∈−ℓ} ǫ_α), for ǫ = ∑_{α∈Γ_0} ǫ_α. Explicitly, the Stokes map is given (in a suitable open subset of U_reg) by (5.5), where f = ∑_{α∈Γ_0} f_α.
The multilogarithm functions M_n : (C^*)^n → C are holomorphic and given by iterated integrals (see [BT1, Definition 4.4]). Formula (5.4) follows from a universal formula for the Taylor series of S(Z)^{-1} around f = 0 [BT2, Th. 4.8], that involves sums of multilogarithms indexed by finite trees. Crucially, by the positivity property of the stability data the sum in the right hand side of (5.4) is finite, and hence the Taylor series of S(Z)^{-1} yields a global inverse in this case. The functions J_n in (5.4) are holomorphic with branch cuts, making the expression J_n(Z(α_1), · · · , Z(α_n)) a well-defined holomorphic function on the complement of a divisor in U_reg (see [BT2, Thm. 4.9]). The remarkable point is that the discontinuities of J_n precisely balance the specific jumping behaviour of a(Z) in the continuous family of stability data, thus resulting in a continuous, holomorphic function f_α(Z) in U. By results of Jimbo-Miwa-Ueno [JMU] and Boalch [B1], the isomonodromy condition for the holomorphic family ∇^{BTL} can be recast in terms of Joyce's differential equation [J5]. Remark 5.2. We should stress that formula (5.5) for the Stokes map would not be valid if the residue f had a non-zero component in the centralizer of Z; that is, the condition f ∈ [Z, b] = ĝ (which ensures nilpotency of the residues of the Laplace transformed connection [BT2, Sec. 8.2]) is essential to prove (5.5), and hence also (5.4) and the uniqueness part of Theorem 5.1. In the original approach [BT1], the connections ∇^{BTL}(Z) take values in the Hall algebra CF(C) of constructible functions of an abelian category C with Grothendieck group Γ. In this context, the right hand side of (5.4) can be identified with the holomorphic generating function for Z-semistables in class α introduced by Joyce [J5]. The next four sections are devoted to explaining the relation between the connections ∇^{BTL} in Theorem 5.1 and the Hall-algebra valued connections of [BT1]. 5.3. Motivic Hall algebras. We shall restrict ourselves to the case where C is the abelian category of finite-dimensional representations of a quiver Q without oriented cycles. We work over the field of complex numbers C. It should be possible to generalize our construction to more general abelian categories, but for the sake of simplicity we focus on this well-behaved case. Since the category C has finite length and finitely many simple modules S_1, . . . , S_n up to isomorphism (corresponding to the vertices of Q), the Grothendieck group Γ = K(C) is a rank n lattice, with non-negative cone Γ_0. We denote by Γ_{>0} = Γ_0 \ {0}. Given an integer d ≥ 0, there is an affine variety Rep_d parameterising A-module structures on the vector space C^d (with A the path algebra of Q). The moduli stack M_d of A-modules of dimension d is the quotient stack M_d = [Rep_d / GL_d(C)]. More generally, we can consider the moduli stack of objects in C, M = ⊔_γ M_γ, where M_γ denotes the moduli stack of objects of class γ ∈ Γ. Let SF(C) be the motivic Hall algebra of the category C, as defined by Joyce [J2]. Here we follow closely [Br3]. This is an associative, unital algebra over C with underlying vector space the relative Grothendieck group K(St/M). As a vector space, it is generated by classes of morphisms [f : X → M], where X is an algebraic stack of finite type over C with affine stabilizers. The associative product * on K(St/M) constructed in [J2] endows SF(C) with a structure of graded algebra for the semi-group Γ_0, SF(C) = ⊕_{γ∈Γ_0} SF_γ(C), where SF_γ(C) is the vector space generated by classes [f : X → M_γ].
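As a concrete illustration of these moduli stacks (an added example, phrased with dimension vectors rather than the total dimension d used above), take Q to be the A_2 quiver • → •, so that A = CQ, Γ = Z^2 and Γ_0 = Z_{\ge 0}^2. For a dimension vector (d_1, d_2),
\[
\mathrm{Rep}_{(d_1,d_2)} \;=\; \mathrm{Hom}_{\mathbb C}\big(\mathbb C^{d_1}, \mathbb C^{d_2}\big),
\qquad
\mathcal M_{(d_1,d_2)} \;=\; \big[\, \mathrm{Rep}_{(d_1,d_2)} \,/\, GL_{d_1}(\mathbb C)\times GL_{d_2}(\mathbb C) \,\big],
\]
with the group acting by (g_1,g_2)\cdot\varphi = g_2\,\varphi\, g_1^{-1}. Up to isomorphism the only indecomposables are the simples S_1, S_2 and the unique non-split extension 0 \to S_2 \to M \to S_1 \to 0, of class [S_1]+[S_2].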
We will need later that SF(C) is an algebra over K (St /C) and the explicit description (see [Br3,Lem. 3.9]) There is a canonical graded Lie subalgebra with respect to the commutator bracket [J4,Cor. 5.6] generated as a Lie algebra by sets of special elements (see Remark 5.5) By definition, n(C) is a quotient of the Γ >0 -graded free Lie algebra with symbols e α , α ∈ Γ >0 and hence it is a pro-nilpotent Lie algebra with finite-dimensional homogeneous components (cf. Section 3.4). Let U n(C) be the universal enveloping algebra of n (C). Let n(C) denotes the completion of n(C) with respect to the grading (cf. Section 3.4) and U n(C) the corresponding universal enveloping algebra. Exponentiation in U n(C) leads to a pro-unipotent Lie group with Lie algebra n(C) N ′ ⊂ U n(C). To define the structure group of the (trivial) principal bundle we are interested in, we note that there is a unique surjective algebra morphism (see [J3, p. 66 which extends the identity on n (C). We define the pro-unipotent Lie group N (C) with Lie algebra n(C), to be the exponentation of n (C) in (the completion of) the algebra C (C). As in Section 5.1, one can use the grading of n(C) to form a larger pro-solvable Lie group with pro-solvable Lie algebra b(C) = Hom(Γ, C) ⋉ n(C). Remark 5.3. In the notation of [J3,Def. 8.9], the algebras n(C) and C(C) correspond respectively to L to τ and H to τ . 5.4. Constructible functions and quantum groups. To establish the relation with [BT1], consider the Ringel-Hall algebra of C-valued constructible functions on the moduli stack M where H γ (C) is the subspace of functions supported on modules of class γ. As a vector space, CF(C) is generated by GL d (C)-invariant constructible functions on Rep d for d ≥ 0. There is a surjective morphism [J2,Def. 2.7 & Thm. 5.2] π : SF(C) → CF (C) ( 5.7) and also a natural injection satisfying π • ι = Id, which in general does not preserve the product (see [J2, p. 32]). The morphism π induces surjective algebra morphism from n(C) and C (C) to, respectively, the Lie algebra of constructible functions supported on indecomposables n(C) considered in [BT1,Sec. 4.4] and its universal envelopping algebra C(C) [BT1,Prop. 4.5] (see [J4, p. 21]). Remark 5.4. An important difference between n(C) and n (C) is that the universal envelopping algebra U n(C) is not embedded in SF(C), unlike C(C) ⊂ CF (C). Instead, one has the surjective morphism (5.6). Following [J2,Ex. 4.25 & 5.20], we can make most of the previous costruction very explicit for our choice of abelian category C. As for the construction of the motivic connections, the rest of this section can be safely skipped. Let n + ⊕ h ⊕ n − be the Kac-Moody Lie algebra corresponding to the undirected graph underlying the quiver Q. Then, n(C) is isomorphic to the the positive part n + and C(C) is isomorphic to its universal envelopping algebra U n + . The algebras n(C) and C(C) provide a "quantized'" version of the algebras n + and U n + . To see this, we assume that Q is a Dynkin quiver, so that the Lie algebra n + is finite dimensional. We relate a natural quotient of C(C) with Drinfeld's quantum group [Dr,Ex. 6.2] U q n + . Recall that U q n + is q-deformation of the universal envelopping algebra U n + , and that U n + is recovered in the "semi-classical" limit q → 1. Let C(q 1/2 ) be the algebra of rational functions in q 1/2 with coefficients in C and be the (unique extension of the) virtual Poincaré polynomial [J1,Ex. 4.3] (see also [J2,Th 2.21]). Consider the C(q 1/2 )-module (see [Br3,Rem. 
3.11]) SF(C, P, C(q 1/2 )) = SF(C) ⊗ K(St /C) C(q 1/2 ). (5.10) According to [J2,Th. 5.2 & Rem. 6.5], this space can be endowed with an associative product which naturally identifies it with a quotient of SF (C). We denote by C(C, P, C(q 1/2 )) ⊂ SF(C, P, C(q 1/2 )) the subalgebra induced by C (C). Here we note that the algebra C(C) coincides with the composition algebra, that is, the subalgebra of SF(C) generated by the characteristic functions of simple modules ι(κ [S i ] ) (see Remark 5.10). Then, there is an isomorphism U q n + ∼ = C(C, P, C(q 1/2 )) and we obtain that there is a surjective algebra morphism C(C) → U q n + . Remark 5.5. We note that n(C) is defined in [J3,Def. 8.9] as the Lie subalgebra of SF(C) generated by the elements ǫ α , with α ∈ Γ >0 . In [J4,Cor. 5.6] it is proved that it is independent of Z ∈ Stab(C) (under suitable assumptions that hold in our case). Using the methods of [BT1], the next result constructs the motivic Bridgelad-Toledano-Laredo isomonodromic family for the category C. Theorem 5.6. The analogue of Theorem 5.1 holds. It leads to a unique holomorphic, isomonodromic family ∇ C of irregular connections on the holomorphically trivial principal B(C)-bundle on P 1 with Stokes data determined by (5.12). The family of connections in [BT1] is induced from ∇ C using the morphism (5.7). Proof. The proof is in two steps. First, given a ray ℓ ⊂ C * , define an element where the exponential is taken in the universal envelopping algebra U n (C). The methods of [BT1] (summarised in our sketchy proof of Theorem 5.1) imply now the existence of a unique holomorphic family of irregular connections with stokes factors SS ′ ℓ and residue f ′ (Z) ∈ n(C) given by the analogue of (5.4). Note that ∇ ′ is not neccessarily isomonodromic, as the condition that (2.3) is constant may not hold in the group N ′ . The second step deals with the lack of isomodromy of ∇ ′ . We use the surjective morphism (5.6) to induce from ∇ ′ (Z) a family of B(C)-connections with residue f C (Z) ∈ n(C). The fundamental solutions of (5.13) are B(C)-valued, induced from the Hom(Γ, C * ) ⋉ N ′ -valued fundamental solutions of ∇ ′ (Z) using (5.6). This implies that the stokes factors of ∇ C (Z) along the set of Stokes rays ℓ γ (Z), with γ ∈ Γ, are now given by where the exponential is taken in C (C), and hence the family (5.13) is isomonodromic. The claimed relation between ∇ C and the isomonodromic family of connections constructed in [BT1] is straightforward by construction. Remark 5.7. Note that ∇ C (Z) and ∇ ′ (Z) are equal as n(C)-valued 1-forms on P 1 , that is, f C (Z) = f ′ (Z) since (5.6) is the identity restricted to n (C). The effect of applying this morphism if simply to express f ′ (Z) in terms of the product * in C(C) rather than ⊗. Although this may seem confusing at first, the point is that they are different as connections on P 1 , with fundamental solutions taking values in different groups. is the characteristic function of C and [C] ∈ K(St /C). Let g be the Poisson Lie algebra corresponding to (Γ, ·, · ), constructed as in Section 3.3. Let g q ⊂ g(q) be the C[q ±1/2 ]-module generated by theê γ for γ ∈ Γ. Then, g q is a subalgebra of g(q) endowed with a Poisson Lie bracket (cf. [Br3,Sec. 5 (5.15) Informally, we think of g as a degeneration of g q in the limit q → 1 We wish to combine now the morphism (5.14) with the isomorphism (5.15), to induce the connections (5.3) from (5.13). 
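To make the informal statement about the q → 1 degeneration explicit, here is one standard presentation of the quantum torus and its semi-classical limit (the normalisation, in particular the power of q^{1/2} and the choice of denominator, is an assumption on our part; cf. [Br3]):
\[
\hat e_\gamma\,\hat e_\eta \;=\; q^{\langle\gamma,\eta\rangle/2}\,\hat e_{\gamma+\eta},
\qquad\text{so}\qquad
[\hat e_\gamma, \hat e_\eta] \;=\; \big(q^{\langle\gamma,\eta\rangle/2} - q^{-\langle\gamma,\eta\rangle/2}\big)\,\hat e_{\gamma+\eta},
\]
and dividing the commutator by q^{1/2}-q^{-1/2} (or by q-1, depending on conventions) and letting q \to 1 gives
\[
\lim_{q\to 1}\; \frac{[\hat e_\gamma, \hat e_\eta]}{q^{1/2}-q^{-1/2}} \;=\; \langle\gamma,\eta\rangle\, e_{\gamma+\eta} \;=\; [e_\gamma, e_\eta],
\]
recovering the Poisson Lie algebra g of Section 3.3 as a degeneration of g_q.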
The following statement, which is a consequence of deep results of Joyce, shows that this can be done upon composition of Φ with the adjoint representation. Let Z ∈ Stab(C) be a stability condition and consider ǫ α ∈ n α (C) as in (5.11). The proof follows from [J2,Th. 8.7], that implies (q − 1)Φ(ǫ α ) ∈ g q , combined with (5.15), which gives that g q is abelian modulo (q − 1)g q . To get a feeling, suppose that Z ∈ Stab (C) such that the semi-stable objects of class γ are stable. Then, the moduli space M ss γ of semi-stable representations of class γ is a smooth projective variety and (see [R1]) where now P (M ss γ ) is simply the Poincare polynomial in singular cohomology. Explicit combinatorial formulae for this quantity can be found in [R1, J2]. Choosing γ = [S i ], the class of a simple element, we have ǫ γ = δ γ which clearly satisfies the statement. Consider the push-forward of derivations in the associative algebra (g q , * ) induced by (5.15) sc : D * (g q ) → D * (g) and the partially defined Lie algebra morphism (5.16) By Lemma 5.8, (5.16) is well-defined on elements of the continuous family (5.12) and hence yields a continuous family of stability data on D * (g). Furthermore, as the formula for the residue of the connection (5.13) is a Lie series in the elements ǫ α (see [BT1,Th. 3.7]), the morphism (5.16) is well-defined on f C (Z). This lead us to the following result. Corollary 5.9. The universal connection ∇ C induces a holomorphic, isomonodromic family of D * (g)-valued connections, with prescribed Stokes factors An explicit, combinatorial formula for (5.17) was provided by Reineke [R1, R2]. We note that it is not obvious that the methods of [BT1] apply directly to the induced continuous family on Stab(D * (g)). The problem may come from elements in the stabilty data that commute with Z (cf. Remark 5.2). To analyze a simple case, assume that Q is a Dynkin quiver. Then, we can can choose a strongly generic Z ∈ Stab (C) such that the only stable representations are the simples S i , corresponding to the vertices of Q, and semi-stables correspond to direct sums of a unique stable object [K]. In this case, the non-trivial Stokes factors correspond to ℓ i = R >0 Z([S i ]) and (5.17) reduces to This shows that, at least for Dynkin quivers, the continuous family in Stab(D * (g)) is induced from a continuous family in Stab(g) via the adjoint representation, and hence the connections in Corollary 5.9 are induced from (5.3). Remark 5.10. The previous choice of stability data shows that the algebra C(C) is generated by characteristic functions of simple modules. Remark 5.11. Motivation for considering SF(C) instead of CF (C) in our discussion comes from the fact that the inclusion (5.8) is not a morphism (see [J2, p. 32]). Thus, the integration morphism Φ cannot be applied directly to the CF(C)-valued connection in [BT1] to induce (5.3). 6. ∇ BT L (Z) as a scaling limit of ∇(Z, R) 6.1. In this Section (6.2 -6.7) we prove Theorem 2.1. Consider the rescaled connections The R → 0 limit of ∇ t (Z, R) does not exist in this fixed gauge. This is already apparent in the single-ray solution discussed in 4.11: we have The integrals ℓγ dz ′ z ′ exp(L(z ′ )(kγ)) are real, positive and diverge logarithmically as R → 0. Another example of R → 0 singularity with multiple rays will be discussed in 6.8 at the end of this section. 
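The logarithmic divergence just claimed can be verified directly (a short computation of ours, not in the text): parametrise ℓ_γ by z' = -s\,Z(\gamma)/|Z(\gamma)|, s > 0, and recall that for ∇(Z, R) the exponent is formed from RZ, so that
\[
\int_{\ell_\gamma} \frac{dz'}{z'}\, \exp\big( L(z')(k\gamma) \big)
\;=\; \int_0^{\infty} \frac{ds}{s}\, e^{-Rk|Z(\gamma)|\,(s+s^{-1})}
\;=\; 2\,K_0\big(2Rk\,|Z(\gamma)|\big),
\]
which is indeed real and positive; since K_0(x) = -\log(x/2) - \gamma_E + O(x^2\log x) as x \to 0^+, these integrals diverge like -2\log R as R \to 0, as stated.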
In 6.2 -6.6 below, we construct a sequence of constant gauge transformations g(R) such that lim R→0 g(R) · ∇ t (Z, R) exists and has the form In 6.7 we will compute the Stokes data for ∇ t and deduce its actual equality with ∇ BT L (−Z). 6.2. An auxiliary integral operator. The sequence of gauge transformations g(R) is given explicitly by Our strategy to show that the limit ∇ t exists is to show first that the local flat sections of ∇(Z, R) given by the restriction of X(z; Z, R) to a sector Σ have a finite R → 0 limit after gauging them by g −1 (R). Fixing a sector Σ between consecutive Stokes rays of ∇(Z, R) (in particular, working in a finite-dimensional quotient g k ), we will show that the limit exists. Notice that lim R→0 X 0 (Rt)(e α ) = e α exp * (t −1 Z(α)). As X(Rt) is the composition Y (Rt)X 0 (Rt) in Aut * ( g), it is enough to consider the limit lim To study this we consider the Aut * ( g)-valued function h, holomorphic in Σ, given by h(t; Z, R) = g −1 (R)Y (Rt). Let us show that h(t; Z, R) is a fixed point for an integral operator which is very similar to Z. By definition, for all α ∈ Γ we have where σ(Rt) is a sum of integrals of the form one for each graph T in a class of decorated, rooted trees. Here I(z ′ , R) is some other iterated integral, with values in Γ ⊗ĝ, of the form for some g-valued function r(z ′ , R). By rescaling, we can rewrite each term in (6.1) as Next we compute where σ ′ (Rt) is again a sum of terms labelled by the same diagrams T , of the form Going back to h(t), we have In the last equation we used the algebra automorphism property of Y −1 0 . By the same property, the factor g −1 (R)( α, σ ′ (Rt) ) splits into a sum of terms of the form one for each T . In each finite-dimensional quotient g k , the latter term can be rewritten as 1 Recalling the iterative definition of I(Rz ′ , R), this equals in turn for some "residual" (iterated) integral I ′ (Rz ′ , R). Indeed, the above rewriting can be seen as the operation of removing the root a (labelled by α ′ ) from the fixed tree T corresponding to (6.2), leaving a finite set of disconnected trees T \ {a}. Notice that we allow the empty tree in this residual set, corresponding to a factor 1. Now we let the original T behind (6.2) vary among all trees with root a labelled by α ′ , and sum over all the corresponding integrals, getting for each fixed α ′ , By a standard combinatorial principle, disconnected = exp connected , so for each α ′ the last integral equals The upshot is that we have found an integro-differential equation for h(t), namely We can turn this into the integral equation where the outer integral is computed along some path in the simply connected domain Σ starting from a fixed base point t 0 . 6.3. Fixed point. Of course (6.3) looks quite similar to the integral equation for X(z). The advantage is that now it is easy to compute R → 0 limits. To see this we leave aside h(t) = g −1 (R)Y (Rt) for a moment and consider solutions of (6.3) obtained by iteration. Since we have, for all R > 0, we look at the solutionh(t) obtained by iteration starting from h 0 (t) = I. We will then prove that when we choose t 0 = 0 (by a limiting argument) we have in fact h =h. Remark 6.1. Notice that, starting from h 0 (t) = I, all the integrals So at least the first iteration from h 0 (t) = I has a well defined as R → 0. This does not happen for the first iteration of the integral equation for X starting from X 0 : that is already divergent. 
As in the case of Z, we have an expression for the iterative solution of (6.3), namely (with the usual notation) (6.4) Let as assume inductively that each H T j (z) is of order o(|z| ε ) as z → ∞ for all ε > 0 (i.e. it grows less than any positive power), and that this holds uniformly in R (this is certainly true for the identity). Similarly let us assume inductively that each H T j (z) is bounded as z → 0. Then one can show that the inner integral is convergent and in fact of order O(|t ′ | −1 )·o(|t ′ | ε ) for all ε > 0 as t ′ → ∞ uniformly as R → 0. At the same time it is uniformly bounded near t ′ = 0 for all R. In particular H T (t) does have a well-defined R → 0 limit, namely just Going back to (6.4), we need to check the inductive hypotheses. But since the inner integral is O(|t ′ | −1 )·o(|t ′ | ε ) uniformly, H T (t) is O(log |t|)·o(|t| ε ) for all ε > 0 as |t| → ∞, which is again of order o(|t| ε ) for all ε > 0. Also, the integrand is uniformly bounded as t ′ → 0 for all R, so the same is true for H T (t) as t → 0. The upshot of this is that the limit ofh(t) as R → 0 exists and is given by (6.5) Example. In the single-ray case of Section 4.11, we get , and one can check that we can apply Fubini to get By a limiting argument, we can take the base point t 0 = 0 and find H kγ (t) = e kγ * 1 2πi ℓγ dz z t z − t exp(kZ(γ)z −1 ). 6.4. Application of Fubini's theorem. The argument above proves in fact that for all R ≥ 0 At the same time, by the definition of ℓ γ T , the integral Since the integration path from t 0 to t is compact for a fixed t, the inner integral is O(|z| −2 ) as z → ∞. Thefore the second integral is also finite for all R ≥ 0. Then for a fixed t we can apply Fubini to rewrite By a limiting argument, we can choose t 0 = 0 (which strictly speaking is not in Σ), and find for the corresponding solutions and (6.7) 6.5. When t 0 = 0 we haveh(t) = h(t) = g −1 (R)Y (Rt). To see this notice that both h(t) andh(t) are solutions to (6.3) (with t 0 = 0). One can check that this implies that h −1 (t)h(t) (the composition in Aut * ( g)) is holomorphic on C * . On the other hand both h(t) andh(t) are bounded as t → 0 and t → ∞. For h(t) this follows from the same property for Y (Rt), while forh(t) it follows from (6.6) above. So h −1 (t)h(t) ∈ Aut * ( g) is a constant, and since lim t→0 h −1 (t)h(t) = I by construction, it must be the identity. It follows in particular that h(t) has a finite limit in Aut * ( g) as R → 0, which we denote by h(t). Notice that X 0 (t) is naturally a flat section for the connection This implies that, in each sector Σ, X(t; Z) is a flat section of the pullback connection ∇| Σ = h −1 (t)| Σ · ∇ 0 . By precisely the same argument as in section 4, the ∇| Σ for various Σ glue to a connection on C * ⊂ P 1 in each finite-dimensional quotient g k , and taking an inverse limit we find a well-defined D * ( g)-valued connection ∇. Notice that ∇ is in fact (a posteriori) the R → 0 limit of the connections g(R) · ∇ t , which have the form exist. In fact, we claim that lim R→0Ã (1) (R) also exists, from which it follows that To prove the claim, we notice that our argument in section 6.2 is symmetric, in that it applies equally well to the different scaling limit z = R −1 t, R → 0, with the same gauge transformations g(R). The rescaled connections ∇ connections take the form Regarding h(t) as a formal gauge transformation (i.e. an element of Aut * ( g[[z]])), we see that the formal equivalence type of ∇(t; Z) is d+ Z t 2 dt. 
Therefore the Stokes rays of ∇(t; Z) are ℓ α , α ∈ Γ. Thus the Stokes rays are independent of R. Notice also that since X(t) is the R → 0 limit of g −1 (R)X(Rt), we have immediately for a Stokes ray ℓ ⊂ H ′ and a point z 0 ∈ ℓ ). 6.7. Stokes factors at 0. To compute the Stokes factors of ∇ rigorously, we need to understand the analytic continuation of the limit h(t) beyond a sector Σ. By (6.5) this amounts to understanding the continuation of the integrals H T (t). We wish to prove the following: each H T (t) extends to an analytic function in the supersector Σ, which vanishes as t → 0 in Σ. We argue by induction on the length of T . The result certainly holds for H ∅ (t) = 1. Next look at We assume without loss of generality that ℓ α lies in Σ. By induction, we can assume that we have already extended all the H T j (z) to analytic functions on Σ, vanishing as t → 0 in Σ. We write H(z) for the product of these analytic continuations, i.e. on Σ, Write Z = Z(γ T ) for simplicity. Then setting z = −Zs we have Upon the change of variable s → s −1 , we may rewrite this as where the integral is taken along the path t −1 Z + R >0 . By our inductive assumptions on H(z) (in particular since H(−Z(σ − t −1 Z) −1 ) → 0 as σ → ∞ along the path), the integral is a holomorphic function of t ∈ Σ, except for a branch cut discontinuity along the ray t −1 Z ∈ R <0 , that is ℓ α . However we can extend across this branch cut simply by rotating the integration path around its origin: we integrate along the path t −1 Z + e ±iφ R >0 for some φ ∈ (−π/2, π/2) such that the ray lies in Σ (see Figure 3). By induction and since φ ∈ (−π/2, π/2) this analytic continuation has the same asymptotics as the new integration ray, uniformly as t → 0, so it is enough to check that also vanishes as t → 0. But since φ ∈ (−π/2, π/2), the real part of σ is strictly positive as σ → ∞ along t −1 Z + e ±iφ R >0 . Therefore the integral along the path t −1 Z + e ±iφ R >0 behaves like the integral along t −1 Z + R >0 for t → 0, and the latter is vanishing: one way to see this is to go back to its expression as Arguing as above, we can rewrite this as The classical incomplete Gamma special function is defined by It is well known that Γ(a, t) is an analytic function of t ∈ C * , with a branch cut discontinuity along R <0 . Then we have an equality It follows that H α (t) is analytic for all t ∈ C * , with a single branch cut along Z(α)t −1 ∈ R <0 . In this case it is easy to work out the various branches of the function: from the representation we only have to choose the integration path from πZt −1 to ∞ suitably to get a branch of the function which extends across ℓ α (in fact, up to e ±π/2 ℓ α ). Moreover, by the same representation, one can check that all of these branches are vanishing as t → 0. We have proved that ∇(Z) has the same formal type and Stokes factors as ∇ BT L (−Z). It follows from the general theory developed in [B1,BT2] that ∇(Z) and ∇ BT L (−Z) are gauge equivalent. Note that a gauge transformation taking ∇ BT L (−Z) to ∇(Z) must be t-constant, and such constant gauge transformations preserves the off-diagonal property of the residuef described in Remark 5.2. The actual equality follows now from the uniqueness part of the main theorem on [BT2]. 6.8. Higher order divergencies. Finally we briefly discuss an example of the more complicated R → 0 singularities which appear in the general multiple rays case. 
Using this one can show that for Re(ω) < 0 the leading order term as R → 0 in (6.10) can be written as Standard asymptotics forK 1 now imply that the leading order term is O(log 2 (R)). 7. The R → ∞ limit and tropical geometry 7.1. As we explained in the Introduction (2.11), it is expected that the fourdimensional tt * -connections of mathematical physics ∇ GM N (Z, R) display some form of tropical behaviour as we approach the R → ∞ limit. One may wonder if some shadow of this behaviour may still be present in the toy models ∇(Z, R). In this section we show that this is indeed the case, by proving fully precise statements of Theorems 2.2 and 2.3. The precise version of Theorem 2.2 is given in 7.2 below and applies to the distinguished flat sections given by the restriction of X(z; Z, R). By (4.13), X(z; Z, R) is expressed in terms of (rational) linear combinations of the special functions G T (z; Z, R): these play the same role as multilogarithms in [BT1], [BT2]. The tropical behaviour we describe concerns the functions G T (z; Z, R) in the limit when Z approaches a degenerate linear map and, at the same time, R → ∞. The proof of Theorem 2.2 is carried out in several steps in 7.3 -7.6. We will recall the few (basic) notions from tropical geometry we need in 7.6 -7.7. The precise version of Theorem 2.3 is given in 7.8 and the proof is carried out in 7.9 -7.12. 7.2. General setup. For simplicity, we will only describe the model case when Γ is generated by two elements γ, η with γ, η = κ > 0. We will choose for definiteness a family Z ∈ U parametrised by a connected, open subset U ⊂ Hom(Γ, C) for which Z(γ), Z(η) lie in the positive quadrant. We will write Z ± for a point in the open subset U ± of U where ± Im Z(γ)/Z(η) > 0; we assume that U ± are nonempty. We fix a continuous family of stability data on g characterised by Ω(γ, Z + ) = Ω(η, Z + ) = 1, with all other Ω(α, Z + ) vanish. The locally constant function a : Γ → g underlying the stability data in U + is given simply by a(kγ) = − 1 k 2 γ, a(hη) = − 1 h 2 η for h, k > 0, with all other values vanishing. While this family is very simple, the corresponding irregular connections ∇(Z, R) are already as complicated as in the most general case. Choose a fixed z * ∈ C * with Re z Im z < 0. We consider trees T such that W T (Z + ) = 0, i.e. their vertices are decorated by positive multiples of the basic vectors γ or η. For each tree T with more than a single vertex, the special function G T (z * , Z; R) is sectionally holomorphic in Z ∈ U : it is discontinuous along the critical locus where Im Z(γ)/Z(η) = 0. The idea we wish to implement is very simple: we will rewrite G T (z * ; Z + , R) as a sum of iterated integrals over rays ℓ(Z − ), of the form ±G T ′ (z * ; Z − , R) for various T ′ , with the only difference that the integrands involve X 0 (z; Z + ; R). Since X 0 is continuous in Z, G T (z * ; Z + , R) will be asymptotically equal to the sum of these terms ±G T ′ (z * ; Z − , R) as |Z + − Z − | → 0; and this gives an effective way to see which linear combination of the special functions G T ′ (z * ; Z − , R) replaces G T (z * ; Z + , R) in the expansion of X(z * ; Z − , R). The theorem below makes this idea precise, and characterises the single-vertex term G • (z * ; Z − , R) in this linear combination in terms of certain tropical graphs. Theorem 2.2 (precise statement). There is an expansion where r(|Z + − Z − |) → 0 as |Z + − Z − | → 0, and we sum over a finite set of rooted trees T ′ , not necessarily distinct, decorated by Γ. 
Let β ∈ Γ denote the sum i α(i) of all decorations of T . The terms corresponding to a single-vertex tree in (7.1) are labelled by a finite set of graphs C i containing |T 0 | external 1-valent vertices and with 3-valent internal vertices. These terms are all equal to G β (z * ; Z − , R) up to sign, and differ by a well defined factor ε(C i ) = ±1 which is uniquely attached to the graph C i . Moreover, the graphs C i come naturally with an extra combinatorial structure, which says precisely that they are the combinatorial types of a finite set of tropical curves immersed in the plane R 2 . Finally, the single-vertex terms in (7.1) are uniquely characterised by the asymptotic behaviour: they are of order as R → ∞, uniformly as |Z + − Z − | → 0. For the sake of simplicity we will also assume that the lattice element β is primitive in Γ. Similar results hold in the non-primitive case but require keeping track of disconnected curves. Before tackling the general case, it is helpful to illustrate the asymptotic expansion for G T (R) in terms of the graphs C i starting with the simplest case when T is the decorated tree with a single edge γ → η. By definition, for Z + ∈ U + we have One way to compute δG T is to first rewrite G T (z * ; Z + , R) in terms of integrals over rays ℓ(Z − ) for a point Z − ∈ U − . Recall that X 0 (s) is holomorphic in C * , and that ρ(s, t) is a meromorphic function of t with a simple pole at s, with Res s ρ(s, t) = (2πi) −1 . By our choice of z * and the definitions of U ± , it follows that we can rewrite The last term (7.3) comes from the residue theorem when we push ℓ η (Z + ) over to ℓ η (Z − ), crossing the first integration ray ℓ γ (Z − ). Notice that we have the simple but crucial property It is this property that allows to relate G T to tropical curves in R 2 in the general case. In the present example, pushing ℓ γ (Z − ) in the residue term to ℓ γ+η (Z 0 ), and recalling that X 0 is continuous across the critical locus where Im Z(γ)/Z(η) = 0, we find where r → 0 as |Z + − Z − | → 0. The single-vertex term has asymptotics There is an obvious graph C which we can attach to the computation above, displayed in Figure 4. There are edges E 1 , E 2 labelled by the two factors in (7.2), and E 3 labelled by the residue term (7.3). These edges meet in a single vertex V , Figure 4. and come with attached integral vectors α(E 1 ) = γ, α(E 2 ) = η and α(E 3 ) = γ+η. It is natural to think of E 1 , E 2 as incoming in V , and E 3 as outgoing from V . Keeping track of this orientation, we have the balancing condition −α(E 1 ) − α(E 2 ) + α(E 3 ) = 0. 7.3. Expansion for G T (z * ; Z + , R) across critical locus. We will show that the simple analysis above can be carried out in general, for an arbitrary G T (z * ; Z, R) function, up to leading order terms as R → 0. Fix a decorated tree T as in Section 4.6, with W T (Z + ) = 0. The precise form of the expansion (7.1) depends on a choice of total order for the vertices of T . We simply fix one such total order, without assuming that it is compatible with the natural orientation of T as a rooted tree (i.e. flowing away from the root). It will be useful to introduce the notion of a totally ordered tree T ′ attached to an iterated integral I(T ′ ): we mean by this that there is a bijective correspondence between vertices of T ′ and factors of the form appearing in I(T ′ ), such that the factor appears if and only if there is an arrow i → j in T ′ . 
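For later reference, the "simple but crucial property" of X_0 mentioned above (it is stated explicitly in 7.5 below, where it underlies the balancing condition) is the multiplicativity

```latex
X_0(z; Z, R)(e_{\alpha_1}) * X_0(z; Z, R)(e_{\alpha_2}) \;=\; X_0(z; Z, R)(e_{\alpha_1 + \alpha_2}).
```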
Notice that in particular T is attached to the iterated integral G T (z * ; Z + , R) in this sense. Remark 7.1. In (7.4) we allow z i ∈ ℓ, i.e. we allow factors of the form where z i ∈ ℓ. However in this case (7.4) will be decorated with the direction in which ℓ ′ approaches ℓ, using ℓ ′ → ℓ ± for the clockwise (respectively counterclockwise) direction. Let T ′ be a tree which is attached to an iterated integral in the sense above. We will construct from T ′ a finite set of trees S(T ′ ) of the same type, obtained by applying the residue theorem. To save some space, we set X 0 α (z) = X 0 (z; Z + , R)(e α ). In the following, we say that a ray ℓ separates ℓ 1 , ℓ 2 if ℓ 1 , ℓ 2 lie in different connected components of the complement of ℓ in the sector between ℓ γ (Z + ), ℓ η (Z + ). We allow the limiting case in which ℓ 1 → ℓ in a component which does not contain ℓ 2 , or possibly ℓ 1 → ℓ and ℓ 2 → ℓ in different components. Consider the set of vertices j ∈ T ′ for which one of the following occurs: (1) the corresponding factor in I(T ′ ) has the form where α(i) is a positive multiple of γ or η, or (2) it is of the form for some ray ℓ ⊂ C * which is not one of ℓ α(j) (Z ± ). In fact we will see (inductively) that there is at most one vertex i of T ′ for which (2) holds. If the set of j satisfying (1) or (2) is empty we simply set S(T ′ ) = {T ′ }. Otherwise we choose the first element j in this set (with respect to the total order of T ′ ). As T is rooted, there is at most one arrow i → j, and possibly several arrows j → k. Since j satisfies (1) or (2), the factor of I(T ′ ) corresponding to j fits into where ℓ is either ℓ + α or a ray which is distinct from ℓ − α , and h → i. If none of the rays ℓ α(i) (Z ± ) and ℓ α(k) (Z − ) separate ℓ and ℓ α (Z − ), we set S(T ′ ) = {T ′′ }, with T ′′ = T ′ and I(T ′′ ) obtained from I(T ′ ) by replacing ℓ in the factor above with ℓ − α(j) . Otherwise we apply Fubini and rewrite the integral above in the form The function is holomorphic in the variable z j ∈ C * \{z i , z k }, and has simple poles at z i , z k with residues given respectively by −(2πi) −1 ρ(z i , z k )X 0 α (z i ) and (2πi) −1 ρ(z i , z k )X 0 α (z k ). If we apply the residue theorem (justified by the estimates of integrals along an arc given in section 4.4) we can rewrite (7.5) as It is understood that the term (7.7) is only present if ℓ α(i) (Z ± ) is in fact ℓ α(i) (Z − ) and ℓ α(i) (Z − ) separates ℓ and ℓ α (Z − ), while a term (7.8) appears for each ℓ α(k ′ ) (Z − ) separating ℓ, ℓ α (Z − ). The signs in (7.7), (7.8) are determined according to whether ℓ moving to ℓ α (Z − ) crosses ℓ α(i) (Z − ) (respectively ℓ α(k ′ ) (Z − )) in the clockwise, respectively counterclockwise direction. We define trees T ′′ in S(T ′ ) to in bijection with the terms (7.6), (7.7), (7.8). There is an obvious (rooted, decorated, totally ordered) tree T ′′ attached to each of these integrals, whose underlying bare tree is given simply by contracting the edge i → j in T . By construction, the condition (2) can happen for at most a single vertex of T ′′ . Starting from our original pair of T and I(T ) = G T (z * ; Z + , R), by construction the sequence of sets S(T ), S(S(T )), . . . stabilises after a finite number of steps; we let S (p) (T ) denote the first set for which S (p) (T ) = S (p+1) (T ). This finishes the construction of the expansion (7.1). 
Indeed G T (z * ; Z + , R) is a sum of terms which are in bijection with elements of S (p) (T ), and these all have the form The single-vertex terms in (7.1) are in bijection with trees T p ∈ S (p) (T ) which contain a single vertex. 7.4. Highest order terms. We characterise the single-vertex terms in (7.1) by their asymptotic behaviour. Let T p denote a tree in S (p) (T ) with a single vertex. Then by construction for a unique sign attached to T p by orientations in the residue theorem, and where β ∈ Γ is given by the sum of all the lattice elements attached to the vertices of T . By the results of section 4.4 we have an expansion as R → 0 which holds uniformly as |Z + − Z − | → 0. Suppose now that T 2 ∈ S (p) (T ) is a tree which contains more than a single vertex. We claim that this is subleading, i.e. there is an expansion of the form for some function f such that Indeed by construction in this case we have where the tree T 2 contains an extra vertex 0 mapping to the root of T 2 , with z 0 = z * . Iterating the argument in section 4.4 sufficiently many times, one can then show that This decays faster than (2|Z This follows immediately from our assumption that β is primitive in Γ (using that Z − is nondegenerate). 7.5. Tropical graphs attached to highest order terms. It is now a simple matter to show that a tree T p ∈ S (p) (T ) containing a single vertex determines a graph C containing only 1-valent andd 3-valent vertices, and whose edges are decorated by elements of Γ. We will show that this extra data satisfy two relations, which imply that C i is the combinatorial type of a tropical curve in R 2 (this notion will also be recalled). The tree T p determines a unique sequence of trees T r ∈ S (i) (T ), r = 0, . . . , p, its ancestors, with T 0 = T . Moreover there are natural maps between the set of vertices ϕ r : T 0 r → T 0 r+1 , such that ϕ r is either a bijection, or maps two vertices i 1 , i 2 to the same vertex i ∈ T 0 r+1 (and is a bijection on T 0 r \ {i 1 , i 2 }). The set of vertices r T 0 r and gluing maps {ϕ r } define a graph C, whose internal vertices are either 2-valent or 3-valent. Let V be a 3-valent vertex of C, corresponding to a vertex of T r . By construction, this determines a unique factor of the form (7.6) in I(T r−1 ), and V corresponds to a unique nonzero residue term of the form (7.7) or (7.8). This means that there there is a natural choice of incoming edges E 1 , E 2 , respectively an outgoing edge E 3 . The edges E i come naturally with vectors α(E i ) ∈ Γ: in the notation of (7.6) -(7.8) these are given by (α(i), α, α(i) + α), respectively (α(i), α, α(k ′ ) + α). Thus we always have the balancing condition Notice that the balancing condition is a direct consequence of the residue theorem and the property X 0 (z ′ ; Z, R)(e α 1 ) * X 0 (z ′ ; Z, R)(e α 2 ) = X 0 (z ′ ; Z, R)(e α 1 +α 2 ). At the same time we see that α(E 1 ), α(E 2 ) ∈ Γ are linearly independent over Q, otherwise the residue term with lattice element α(E 3 ) = α(E 1 )+α(E 2 ) would not appear in (7.7) -(7.8). Also, if E and E ′ are edges of C which are respectively outgoing and incoming to 3-valent vertices V, V ′ , we must have α(E) = α(E ′ ) (since no application of the residue theorem separates V, V ′ ). Finally we define a graph C obtained from C by forgetting all the internal 2-valent vertices. 7.6. Rational tropical curves in R 2 . 
We have attached to our original T , G T (z * ; Z + , R) a finite collection of graphs C i with 3-valent internal vertices, one for each single-vertex tree in S (p) (T ) or, equivalently, one for each leading order term in the expansion Each C i comes with the extra data of a decoration of its edges E by elements α(E) ∈ Γ, satisfying the above conditions of balancing (7.11) and linear independence. The extra data say precisely that C i is the combinatorial type of a rational tropical curve immersed in R 2 . Following [GPS] section 2.1, we define plane rational tropical curves as immersions of certain graphs in R 2 . Let C denote a connected graph with only 3-valent internal vertices. We suppose that C is weighted, i.e. we have the extra data of a positive number w(E) for every edge E of C. We write C as well for the topological model of the graph, and C o for the topological space obtained by removing all 1-valent (external) vertices. A parametrised tropical curve in R 2 is a proper map h : C o → R 2 , such that for all E, the map h| E is an embedding into an affine line of rational slope, and for which the following balancing condition folds. At each image of a vertex h(V ), we have well defined primitive vectors m i ∈ Z 2 pointing out of h(V ) along the directions of the incident edges E 1 , E 2 , E 3 . Then one requires A rational plane tropical curve is then defined as the equivalence class of maps h up to isomorphisms of the domain graph. Following [GM] section 2, the combinatorial type of a tropical curve is defined as the data of the underlying graph C, together with the vectors m i for each internal vertex V . It is now clear that each of our graphs C i is the combinatorial type of a class of tropical curves. As an example, consider the tree and fix the unique total order of vertices which is compatible with the orientation. Then the expansion for G T contains two leading order terms, labelled by the tropical types C 1 , C 2 of Figures 5, 6. Figure 5. The tropical type C 1 7.7. Tropical invariants. Just as for plane algebraic curves, there is a natural notion of degree for a plane rational tropical curve (C o , h) as above, see e.g. [GM] section 2: this is just the unordered collection of vectors −w(E i )m i ∈ Z 2 and w(E out )m out attached to all the external edges of C. Notice that we allow w(E i ) > 1 for some or all the external edges. The enumerative theory of plane tropical curves of fixed degree through the expected number of general points is well established in all genera (going back to Figure 6. The tropical type C 2 the foundational work of Mikhalikin [M], see [GM] for a result in the generality we need here). We will only be concerned with a very special enumerative invariant, which is described in detail in [GPS] section 2.3. Choose l 1 general lines d 1j with the same (positive, primitive) direction d 1 , respectively l 2 general lines d 2j in the direction d 2 . We attach a positive integral weight w ij to the line d ij . Look at the set of parametrised plane rational tropical curves (C o , h) having a collection of unbounded edges E ij , E out , such that h(E ij ) ⊂ d ij and w(E ij ) = w ij . By the balancing condition, the degree of these curves is determined by a weight vector w = (w 1 , w 2 ), where each w i is the collection of integers w ij (for 1 ≤ i ≤ 2 and 1 ≤ j ≤ l i ) such that 1 ≤ w i1 ≤ w i2 ≤ · · · ≤ w il i . By the general theory, for generic d ij the number of isomorphism classes of parametrised curves (C o , h) as above is finite. 
Counting these tropical curves with the multiplicity of tropical geometry yields a number N trop (w) ∈ N >0 , which is invariant under deformation of the constraints d ij . Recall that the tropical multiplicity µ V at a 3-valent vertex V ∈ h(C o ) with associated primitive vectors m i is defined as |w(E i )m i ∧ w(E j )m j | for i = j (this is well defined by the balancing condition). The multiplicity of (C o the product over all 3-valent vertices. As an example N trop ((1, 1), (1, 2)) = 8 is computed by Figure 7. Notice that for the choice of constraints d ij displayed in the figure two combinatorial types appear: a curve of type C 1 and two curves of type C 2 . ((1, 1), (1, 2)) = 8 Remark 7.3. Although we only defined the tropical invariants N trop (w) for twocomponents weight vectors w, as explained in [GPS] section 2.3, there is an obvious extension to an arbitrary number of components (with corresponding directions for the infinite ends). 7.8. Highest order terms and tropical invariants. In the rest of this section we will relate the (combinatorial types of) tropical curves C i constructed above to actual tropical invariants. The C i attached to a single T , G T (z * ; Z + , R) all have the same tropical degree w, which we will sometime denote by deg(T ). The component w 1 (w 2 ) can be identified with the set of multiples of γ (respectively η) in the set of all decorations α(i) (in particular, w is independent of the arbitrary choice of a total order of vertices). Recall also that C i comes with a distinguished sign ε(C i ) = ±1 (rather than a multiplicity), uniquely determined by the residue theorem through (7.7) -(7.8). It is natural to consider the set of all trees T defining the same degree w, and to try and relate the sum T i ε(C i (T )) to N trop (w). Indeed the following holds. Theorem 2.3 (precise statement) The sum over trees T with W T (Z + ) = 0 (i.e. decorated by positive multiples of γ or η) equals the tropical invariant N trop (w), times the combinatorial factor in Γ ⊗ Q given by Our proof is not direct, but relies instead on the methods of [GPS] section 2. 7.9. Tropical types and stability data. The functions X(z; Z ± , R) induce flat sections of ∇(Z ± , R) on a supersector Σ for ∇(Z − , R), with the same asymptotics as z → 0 (uniformly as R → ∞). Since the connections ∇(Z ± , R) glue, choosing z * ∈ Σ, when |Z + − Z − | → 0 we must have X(z * ; Z + , R) − X(z * ; Z − , R) → 0, uniformly as R → ∞. By ?, the same must be true for the difference (7.12) Let T be a tree with W T (Z + ) = 0 as usual. We have W T (Z + ) = W T (Z − ) in this case. This follows since DT(hγ, Z + ) = DT(hγ, Z − ), and similarly DT(kη, Z + ) = DT(kη, Z − ). To check (for example) the first statement notice that for singlevertex trees we have as |Z + −Z − | → 0, uniformly as R → ∞, and by Theorem ? it cannot be cancelled by some other term in (7.12). Let us go back to the difference (7.12). Pick a primitive β ∈ Γ. By the expansion (7.1), the e β component of the first summand contains a distinguished sum of highest order terms which is uniquely characterised by its asymptotics as |Z The unique term in the second summand of (7.12) with matching asymptotics is We have proved ). (7.13) 7.10. Refinement. Consider the set of all trees with W T (Z + ) = 0, i.e. decorated with positive multiples of γ, η, and with total decoration i α(i) = β. 
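As a small computational aside on the multiplicity just defined: since |w(E_i)m_i ∧ w(E_j)m_j| is a 2×2 determinant, vertex and curve multiplicities are immediate to evaluate. The vertex data below are purely illustrative and are not read off from Figure 7.

```python
import numpy as np

def vertex_multiplicity(w1, m1, w2, m2):
    """|w1*m1 ∧ w2*m2| at a 3-valent vertex with incoming primitive directions
    m1, m2 and weights w1, w2 (well defined by the balancing condition)."""
    M = np.column_stack([w1 * np.asarray(m1, dtype=float),
                         w2 * np.asarray(m2, dtype=float)])
    return abs(round(np.linalg.det(M)))

def curve_multiplicity(vertices):
    """Product of the vertex multiplicities over all 3-valent vertices."""
    mult = 1
    for (w1, m1, w2, m2) in vertices:
        mult *= vertex_multiplicity(w1, m1, w2, m2)
    return mult

# A hypothetical curve with two trivalent vertices, chosen only to illustrate
# the computation:
print(curve_multiplicity([(1, (1, 0), 2, (0, 1)),    # vertex multiplicity 2
                          (1, (1, 2), 1, (1, 0))]))  # vertex multiplicity 2 -> total 4
```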
In the previous section, we related the sum of the signs ε(C i (T )) attached to tropical types over all such trees to the stability data, that is the quantity DT(β, Z − ). Fix a weight vector w such that β = |w| 1 γ + |w| 2 η. We need to prove a more refined result, given a similar link between stability data and the sum To achieve this we consider a larger lattice Γ mapping to Γ. Denoting by l i the length of w i , we take Γ to be generated by elements γ 1 , . . . , γ ℓ 1 and η 1 , . . . , η ℓ 2 such that γ i , γ j = η i , η j = 0, γ i , η j = 1. The map π : Γ → Z 2 is given by π(γ i ) = γ, π(η j ) = η. There is of course a pullback family of elements of Hom(Γ, C) induced by our family Z; we will suppress the pullback in our notation. We look at the unique continuous family of stability data on g Γ which correspond to setting Ω(γ i , Z + ) = Ω(η j , Z + ) = 1, with all other Ω(α, Z + ) vanishing. The analogues of the asymptotic expansion (7.1), the construction of tropical types and of the argument leading to (7.13) are straightforward; the only difference is that we consider now trees T which are labelled by positive multiples of γ i , η j . We still write i α(i) for the sum of all decorations of T . Thus we have This is still not enough for our purposes. We need to impose the condition that the trees over which we sum have precisely l 1 + l 2 vertices. This is possible if we consider a formal version of our stability data setting Ω(γ i , Z + ) = Ω(η j , Z + ) = ǫ, with all other Ω(α, Z + ) vanishing. We can think of ǫ as a formal parameter or as an arbitrary rational number. The setup is unchanged, except that W T and DT(β, Z − ) will now be polynomials in the variable ǫ. Therefore where DT(β, Z − )[ǫ l 1 +l 2 ] denotes the coefficient of the monomial ǫ l 1 +l 2 . This is the refinement we need. Given a weight vector w as above, we construct an element β asβ The set of trees T such that W T (Z + ) = 0, iᾱ (i) =β and |T 0 | = l 1 + l 2 is precisely the set P of rooted trees with l 1 + l 2 vertices decorated by {w 11 γ 1 , . . . , w 1l 1 γ l 1 , w 21 η 1 , . . . , w 2l 2 η l 2 }. There is a forgetful map from P to the set P of rooted trees T decorated by elements of Γ, with W T (Z + ) = 0 and deg(T ) = w, given by replacing γ i with γ and η j with η. This is clearly onto. For T mapping to T , we have (after Q-linear extension of π) We also have ε (C i where the latter is computed with respect to the total order induced from T . On the other hand, the fibre of P → P over T contains (Aut(T )) −1 Aut(w) trees. Applying π to both sides of (7.14) proves ). (7.15) 7.11. Application of a result of [GPS]. In the last step of the proof we relate the stability data DT(β, Z − )[ǫ l 1 +l 2 ]ǫ l 1 +l 2 to the tropical count N trop (w). This is where the techniques of [GPS] section 2 are required. By its very definition, DT(β, Z − )[ǫ l 1 +l 2 ] admits the following description. Consider the ordered factorisation problem in Aut( g Γ ) given by j Ad exp(−ǫ Li 2 (e η j )) i Ad exp(−ǫ Li 2 (e γ i )) = → Ad exp(−Ω(ᾱ; Z − )(ǫ) Li 2 (eᾱ)) (7.16) whereᾱ = ℓ 1 i=1 a i γ i + ℓ 2 j=1 b j η j and we are writing the operators from left to right in the clockwise order of Z + (ᾱ) = Z + (α), for α = π(ᾱ). It is straightforward to check that, by the definition of Γ, operators supported on the same ray commute (even if Z − is degenerate) so (7.16) is well posed and admits a unique solution. To compute this, we compare (7.16) with an ordered factorisation problem for automorphisms of a different algebra. 
As an intermediate step, let R denotes the formal power series ring R = C[[s 1 , . . . , s ℓ 1 , t 1 , . . . , t ℓ 2 ]]. If we notice that (7.16) is equivalent to the factorisation problem over Aut(g ⊗ R) (7.17) in the following sense: DT(α; Z − )(ǫ) will now be a polyomial in the variables s i , t j (as well as ǫ), and in fact +l 2 ] appears as the coefficient of the monomial ε l 1 +l 2 (s, t)β in the polynomial DT(β, Z − )(ǫ). ǫ q e qγ s q i . In the notation of [GPS] section 1 (p. 312), we have with all other a jkl , a ik ′ l ′ vanishing. For α ′ primitive, the automorphism Let us go back to ourβ ∈ Γ with β = π(β) primitive. According to [GPS] Theorem 2.8, the coefficient of the monomial (s, t) β e β in log f β admits a tropical description: it equals the sum The invariant N trop (w ′ ) here is computed for a generic choice of constraints d in with the same direction, and similarly d jm with the same direction. The components w ′ in , w ′ jm can be arbitrary increasing collections, satisfying only the condition above. However, we can refine our calculation further by looking only at the coefficient of the monomial ǫ l 1 +l 2 (s, t) β e β in log f β . By the specific form of the coefficients a jpp , a iqq this coefficient is given by the sum over weight vectors w ′ for which the collections w ′ in , w ′ jm contain a single element. There is precisely one such w ′ , given by w ′ = ((w 11 ), . . . , (w 1l 1 ), (w 21 ), . . . , (w 2l 2 )). Clearly, | Aut(w ′ )| = 1, and as w ′ is just a subdivision of w (the type of constraints is the same), we have Thus the coefficient of the monomial ǫ l 1 +l 2 (s, t) β e β equals On the other hand, we know already that this coefficient is precisely DT(β, Z − )[ǫ l 1 +l 2 ]. We have proved (7.18) 7.12. Comparison. Comparing our formulae (7.15), (7.18) gives the promised connection between the tropical types attached to flat sections and actual tropical counts, where we are summing over trees T with W T (Z + ) = 0, i.e. decorated by positive multiples of γ or η. This equality holds in Γ ⊗ Z Q. We can make the relation a bit more explicit. Indeed one has | Aut(w)| (|w| 1 γ+|w| 2 η). 8. A comparison with tt * -type connections 8.1. tt * -like picture. Suppose, for simplicity, that Γ has even rank 2r and −, − is nondegenerate. The important physics paper [GMN1] considers families of stability data on g parametrised by certain special submanifolds B ⊂ Hom(Γ, C) of complex dimension r ("Coulomb branches of pure N = 2 field theories on R 3 × S 1 R "). Starting from these data the authors propose a construction of a family of meromorphic connections ∇ GM N (Z, R) on P 1 , formally very similar to Dubrovin's tt * -connections, parametrised by B and R > 0. These connections take values in the Lie algebra X of complex-valued smooth vector fields on the compact torus Γ ∨ ⊗ Z U (1). The ∇ GM N (Z, R) have precisely the same form as ∇(Z, R), but with A (i) ∈ X. The family ∇ GM N (Z, R) should be isomonodromic, with Stokes data given essentially by ℓ γ,Z (γ ∈ Γ prim ), p≥1 T Ω(pγ,Z) pγ . The latter point is a bit tricky to make sense of, and it involves thinking of local flat sections X γ i (z) of ∇ GM N (Z, R) as maps Γ ∨ ⊗ Z U (1) → Γ ∨ ⊗ Z C * , on which the (birational) torus automorphism inducing T Ω(pγ,Z) pγ ∈ Aut * ( g) acts by acting on the target (here we regard g as functions on Γ ∨ ⊗C * ). 
Physical arguments predict that the connections ∇ GM N (Z, R) should admit a distinguished set of local flat sections such that log X γ i (z; Z, R) gives local holomorphic Darboux coordinates for a global, complete hyperkähler manifold (M, g R ). In the examples coming from pure N = 2 theories, (M, g R ) should be a Hitchin system on P 1 with suitable irregular singularities. There is a well-known Hitchin torus fibration M → B, and R −r should be identified essentially with the volume of the fibres. 8.2. Ooguri-Vafa tt * -type connections. The above tt * -like picture has been established rigorously in a few local (incomplete) cases, the basic example being as above that of Γ ∼ = Z 2 generated by γ, η with γ, η = 1, and symmetric stability data given by Ω(±γ) = 1 with all other Ω vanishing. There are by now a good number of mathematical references for this construction (see e.g. [C]), so we do not reproduce it in detail, but only give a brief reminder, and also collect some formulae we need later on. We write θ γ , θ η for the angular variables on Γ ∨ ⊗ Z U (1) dual to γ, η. 8.3. Relation to hyperKähler metric. Remarkably, one can show that the family of complex-valued two-forms are holomorphic symplectic forms for a hyperKähler structure on M o = Γ ∨ ⊗ Z U (1) × B o . (Here of course we write d for the differential on M o ). A crucial ingredient for this is that the family ω(z) has simple poles at both 0 and ∞, which can be traced back to the double poles at 0 and ∞ for ∇ GM N (i.e. its tt * form). The underlying hypeKähler metric has a U (1)-symmetry (shifting θ η ), and therefore by general theory it can be written in Gibbons-Hawking form, i.e. in terms of a positive harmonic function V and a connection 1-form A with F (A) = * dV , x = Re(u), Im(u), θ γ 2πR . We will need explicit formulae for the potential: where, according to the footnote in [GMN1] p. 22, the regularization constants are given by for some choice of complex parameterΛ. The parameters Λ, R and the regularisation constantsΛ are constrained by By construction, V is periodic in θ γ , and the metric is easily seen to extend to 0 < |u| < |Λ|. However, it extends even across u = 0: the standard way to see this is to compare it with the Taub-NUT hyperKähler metric near the origin. The resulting incomplete hyperKähler metric over the disc of radius |Λ| can be seen as a metric on a neighbourhood of a nodal elliptic curve, and is known as Ooguri-Vafa metric. The potential V is closely related to the Bessel integrals appearing in ∇ GM N : one can show that 8.4. Comparison morphism. Finally, let us show that in this special example there is a Lie algebra morphism X → D * ( g) taking ∇ GM N To define Φ, we choose affine symplectic coordinates on the torus θ γ and θ η (so that ω = −dθ γ ∧ dθ η ) and use the Fourier expansion of an element in a. Setting Φ(e ikθγ +inθη ) = e kγ+nη = e kγ * e nη , where η ∈ Γ = H 1 (T, Z) (resp. γ) is the dual of dθ γ (resp. dθ η ), we have Φ({e ikθγ , e inθη }) = Φ(kne ikθγ+inθη ) = [e kγ , e nη ] and hence Φ extends to an homomorphism. (Note that γ, η = 1). As usual we write D * ( g) for Lie algebra of derivations of the commutative algebra ( g, * ); the derivations of g which are Poisson (i.e. also satisfy the Leibniz rule) are denoted by D( g) ⊂ D * ( g). Notice that there is a distinguished sub-Lie algebra ad( b) ⊂ D( g) generated by Hom(Γ, C) and g (the latter acting via the adjoint representation). The reason for the notation ad( b) will be clarified soon. 
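As a quick symbolic check of the bracket computation in 8.4: with ω = −dθ_γ ∧ dθ_η, the convention {f, g} = ∂_{θ_η}f ∂_{θ_γ}g − ∂_{θ_γ}f ∂_{θ_η}g (my choice of sign convention, made so as to reproduce the identity stated above) indeed gives {e^{ikθ_γ}, e^{inθ_η}} = kn e^{ikθ_γ + inθ_η}.

```python
import sympy as sp

th_g, th_e, k, n = sp.symbols('theta_gamma theta_eta k n', real=True)

# Poisson bracket for omega = -dtheta_gamma ^ dtheta_eta; the sign convention
# below is an assumption chosen to reproduce the bracket used in 8.4.
def bracket(f, g):
    return sp.diff(f, th_e) * sp.diff(g, th_g) - sp.diff(f, th_g) * sp.diff(g, th_e)

f = sp.exp(sp.I * k * th_g)
g = sp.exp(sp.I * n * th_e)

lhs = bracket(f, g)
rhs = k * n * f * g
print(sp.simplify(lhs - rhs))  # prints 0, i.e. {e^{ik th_g}, e^{in th_e}} = kn e^{ik th_g + in th_e}
```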
Notice also that D(a) ⊂ X can be identified with the complexification of the Lie algebra of symplectic vector fields, and so admits a decomposition D(a) = a ⊕ h where h = Hom Z (Γ, C). We claim that this implies that for any D ∈ D(a) there exists (very likely unique) D ′ ∈ ad( b) such that ΦD = D ′ Φ. As Φ is a Poisson Lie algebra morphism, it is enough to check this for an element Z ∈ h. As a derivation, Z is identified with a complex vector field of the form Z = a∂ θγ + b∂ θη for a, b ∈ C. Then, one can easily check that ΦZ = i(a∂ γ + b∂ η )(Φ(−)). We now calculate the connection D ′ related via Φ with the D * (a)-valued connection ∇ GM N . In fact, a little thought shows that in this special case ∇ GM N is in fact D(a)-valued, so D ′ will be in D( g). We will show that actually D ′ lies in ad( b), and indeed Φ∇ GM N (Z) = ∇ sym (−πZ). 9.2. We first compute the limit lim R→0 A (0) z (R). By definition, for R > 0 1 n e inθγ is the the Fouries series of the purely imaginary, periodic L 2 (S 1 ) seesaw function − log(e iθγ ). Using (9.1), (9.2), dominated convergence in L 2 (Z), and Plancharel's theorem, we see that lim The decomposition for fixed Z, is badly behaved as R → 0. The second piece of this decomposition diverges pointwise as R → 0. To see this recall that Λ, R and the regularisation constants κ n are related by κ n = 1 On the other hand, we have Thus the quantity Z · ∂ θ in (9.3) is also implicitly a function of R. If we want this to be constant, or more generally to have a well defined limit as R → 0, we need Λ(R) to converge to a finite limit Λ 0 = 0 as R → 0. By the above relation, this implies thatΛ must also be chosen to be a function of R, with |Λ| → 0 as R → 0. But then κ 0 = |Λ(R)| −1 becomes divergent as R → 0. So V diverges pointwise when R → 0, and in fact since n =0 e inθγ K 0 (2πR|na|) = − 1 2 log a Λ + logā Λ − V, the second piece of (9.3) diverges pointwise. This argument seems to show a way out: we can simply chooseΛ to be a nonzero constant. Then as R → 0 the quantity R −1 V converges pointwise to the function of θ γ ∈ (0, 2π) given by the sum of series 1 4π If we chooseΛ to be a nonzero constant (independent of R) as above so that R −1 V (R) has a finite limit as R → 0, then R −1 A (−1) z (R) does have a finite, nontrivial R → 0 limit, namely This follows from rewriting (9.3) in terms of V , Of course in this case Z is also a function of R, which does not have a limit as R → 0 since |Λ(R)| → ∞ as R → 0. In the light of the physical interpretation of |Λ|, we may say that we got rid of the R → 0 divergency in R −1 A (−1) z by a redefinition of the "energy scale". 9.4. Distributional point of view. For the sake of completeness we also give a simple distributional argument for the above regularisation result, without appealing to an explicit choice of regularisation constants (for which we do not have a purely mathematical reference). For all R > 0, we have an equality of distributions f (θ γ , R) = 1 2π n =0 e inθγ K 0 (2πR|nu|) = − 1 2π log(R) n =0 e inθγ + 1 2π n =0 (K 0 (2πR|nu|) + log(R))e inθγ . We wish to interpret Υ 0 (R) as a sequence of complex gauge transformations, i.e. elements in the complexification of the "gauge group" Diff(S 1 × S 1 ). In general, this is a difficult notion to make sense of; in our case we only need to give a meaning to the action of Υ 0 (R) on the tt * -type connection. The natural choice is to define Υ −1 0 ∇ GM N Υ 0 = d − (Υ 0 ) * A z dz, i.e. to push forward the complex-valued vector field A z by (Υ 0 ) * . 
This means that we need to allow connections which take values in the Lie algebra of complex-valued vector fields on S^1 × C^*. We will check that Υ_0 extends to an element of Diff(S^1 × C^*) and that A_z extends to a vector field on S^1 × C^*. Indeed, recall that the coefficients of the complex vector fields A extend to smooth complex vector fields on S^1 × C^* simply by extending ∂_{θ_η} to i(w∂_w − w̄∂_w̄) (we still …). 9.6. The limit connection lim_{R→0} Υ_0(R) · ∇^{GMN}_t(R) is not smooth, but takes values in L^2 periodic vector fields. Denoting this space by X, we can extend the morphism Φ to a map X → D*(g) by using the Fourier expansion of periodic L^2 vector fields. There is no Poisson bracket on X (the product of L^2 functions is not in general L^2); however, the extension of Φ preserves the bracket whenever it is defined. For D ∈ X we have ΦDφ = Φ(f ∂_{θ_γ} φ) + Φ(g ∂_{θ_η} φ) for any smooth φ, as in this case f ∂_{θ_γ} φ and g ∂_{θ_η} φ are in L^2. Recall that the dilogarithm function in g is Li_2(e_α) = −∑_{k≥1} e_{kα}/k^2. Then one can show Φ lim 2πi .
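As a numerical sanity check of the regularisation used in 9.2–9.4: the small-argument expansion K_0(x) = −log(x/2) − γ_E + O(x^2 log x) shows that K_0(x) + log x stays bounded as x → 0 (with limit log 2 − γ_E), which is why adding the log(R) counterterm produces a finite R → 0 limit for each Fourier mode.

```python
import numpy as np
from scipy.special import k0

# K_0(x) + log(x) -> log(2) - euler_gamma as x -> 0, so the combination
# K_0(2*pi*R*|n*u|) + log(R) appearing in 9.4 has a finite R -> 0 limit
# for each fixed n and u.
limit = np.log(2) - np.euler_gamma
for x in [1e-1, 1e-3, 1e-6]:
    print(x, k0(x) + np.log(x), limit)
```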
TODA-YAMAMOTO CAUSALITY TEST BETWEEN ENERGY CONSUMPTION AND ECONOMIC GROWTH: EVIDENCE FROM A PANEL OF MIDDLE EASTERN COUNTRIES

In this paper, we examine the intertemporal causal relationship between economic growth and energy consumption in sixteen selected Middle East and North Africa (MENA) countries using annual data (1985–2016). Unlike the majority of previous studies, and as an alternative to conventional methods that require time series of the same order of integration and large samples, we apply the Autoregressive Distributed Lag (ARDL) bounds test approach and the causality analysis of Toda & Yamamoto (1995). The results of the bounds test show that there is a stable long-run relationship between economic growth and total final energy consumption. The results of the causality test show that there is a unidirectional causal flow from economic growth to total energy consumption, so that energy conservation policies need not have unfavourable effects on economic growth. Overall, these countries fit the conservation hypothesis, which means that causality runs one way, from economic growth to total final energy consumption, and that energy conservation policies will have little or no negative impact on growth in these energy-dependent countries.

Key factors in determining energy consumption include economic growth, population and urbanization (Kazim, 2007). In general, energy consumption has increased as economies have grown, and after the oil shocks and the subsequent recessions attention turned to the causal relationship between economic growth and energy consumption. Consequently, many researchers have studied the relationship between GDP and energy consumption (Tsani, 2010; Asafu-Adjaye, 2000; Erol and Eden, 1990). Countries with high per capita GDP have also been found to have high per capita energy consumption (Burney, 1995; Leach and Gowen, 1987; Aimer, 2016, 2018). Is economic development more important than energy consumption, or is energy itself a stimulus for economic development? These questions have aroused curiosity and interest among economists and policy analysts over the last decade and motivated the study of the direction of causality between energy consumption and economic variables such as GNP, GDP, income, employment or energy prices (Murry and Nan, 1990; Glasure and Lee, 1998; Cheng and Lai, 1997; Masih and Masih, 1997; Asafu-Adjaye, 2000; Yang, 2000). This is because the direction of causality has significant implications for economic policy. For example, some studies have indicated a two-way causal relationship between GDP and energy consumption (Hwang and Gum, 1991), while others have found a one-way causal relationship (Cheng and Lai, 1997); still others have found no correlation between energy consumption and GDP (Yu and Choi, 1985). The finding of a one-way causality running from energy consumption to GDP indicates an energy-dependent economy in which energy consumption is a stimulus for GDP growth, implying that a lack of energy consumption can negatively affect economic growth or cause weak economic performance. However, a causality running from GDP to energy consumption indicates a less energy-dependent economy, in which energy conservation policies (e.g. rationing electricity) can be implemented with little or no harmful effect on the level of economic activity (GDP).
If there is no causality between energy consumption and GDP, a situation referred to as the Yu & Choi (1985) neutrality hypothesis, energy consumption is not correlated with GDP and, as such, energy conservation policies can be pursued without compromising the economy. Scholars turned relatively late to the relationship between GDP and energy consumption, mainly using time series and panel data methods (Wang et al. 2011; Chaofeng and Weizhong, 2005). In this context, some research results show that GDP is the cause of energy consumption and that there is a long-term correlation between GDP, energy consumption and energy structure; in terms of short-term impact, energy consumption seriously restricts the development of regional economies, but this restrictive effect weakens as the economic level continues to increase. Some studies have also concluded that energy consumption has a positive effect on GDP and shows a two-way causal relationship with GDP, but without long-term equilibrium. Within this framework, our study aims to contribute to the current debate on the importance of energy consumption to economic growth. We use the ARDL bounds test approach and Toda & Yamamoto's causality test, in which we integrate energy production as well as energy consumption and GDP. The remainder of the paper is organised as follows: the second section presents the theoretical framework and reviews the literature related to the research topic; Section III describes the methodology and the model used in the study; the final section is devoted to interpreting the results and to recommendations.

LITERATURE REVIEW

The analysis of the relationships between economic growth and energy consumption involves an account of the different theoretical studies and related empirical work. This review of the literature is useful in that it allows us to learn about the work done in the field, to trace the evolution of thought on our subject, to build on existing work to improve our analysis, and to identify powerful techniques for approaching our study. Some economic analysts have found that the relationship between economic growth and energy consumption can be classified into four categories: the conservation hypothesis, the growth hypothesis, the feedback hypothesis, and the neutrality hypothesis (Payne, 2010; Ozturk, 2010; Apergis and Payne, 2009; Squalli, 2007; Chen et al. 2007; Yoo, 2005; Jumbe, 2004; Kesgingöz and Dilek, 2016). First, the conservation hypothesis is the claim that economic activity increases energy consumption. Kraft and Kraft (1978), the first attempt to analyze the relationship between energy consumption and economic growth, examined US GNP and energy consumption using the Sims test over 1947-1974 and found that economic growth leads to higher energy consumption. Abosedra and Baghestani (1989) noted that studies following Kraft and Kraft (1978) did not achieve the same results; extending the Granger causality test from 1947 to 1987, they again found that economic growth leads to energy consumption. Tang et al. (2009) tested the Granger causality between real GDP and energy consumption in China and found that economic growth increases energy consumption.
The implementation of energy conservation policies under the conservation hypothesis is therefore actively supported. Second, the growth hypothesis holds that energy consumption leads to economic growth, contrary to the conservation hypothesis. Stern (1993) used a multivariate vector autoregressive (VAR) approach that included US energy consumption, GDP, labor and capital from 1947 to 1990 and found that energy consumption drives economic growth. Ho and Siu (2007), analyzing Hong Kong's electricity use and real GDP from 1966 to 2002 with cointegration and a vector error correction model, found that electricity consumption leads to economic growth. These findings suggest that energy conservation policies would impede economic growth, so care must be taken in implementing them. Third, the feedback hypothesis is that energy consumption and economic growth do not have a unilateral causal relationship but affect each other mutually. Hwang and Gum (1991) and, for Korea, Oh and Lee (2004) found the relationship between economic growth and energy consumption to be one of bi-directional causality between the two variables. Oh and Lee (2004), extending the study of Stern (1993), also found that a vector autoregressive model in differences was not enough to judge the long-run relationship between the two variables. Even in this case, energy conservation policies affect growth, so caution is required in implementing them. Fourth, the neutrality hypothesis holds that energy consumption is not relevant to economic growth. Akarca and Long (1980) argued that the Kraft and Kraft (1978) study selected an unstable period because the analysis included an oil-shock period; re-estimating with the Sims (1972) method on a sample shortened by two years, they found that energy consumption is not relevant to economic growth. Eden and Hwang (1984) also extended the analysis period and conducted a Sims test on US GNP and energy consumption from 1974 to 1979; similarly, no causal relationship was found between the two variables. In order to solve the small-sample problem of previous studies, Eden and Jin (1992) conducted a cointegration analysis using monthly data with an emphasis on long-term equilibrium, again indicating that there is no causality between the two variables. Unlike previous studies analyzing the relationship between energy consumption and economic growth in specific countries, Masih and Masih (1996) examined a group of countries within a multi-country framework. This study is distinguished from previous studies in that it is, from the viewpoint of the researcher, the first of its kind to examine the impact of total final energy consumption on the growth of selected MENA countries and to introduce economic factors not addressed in previous studies of these countries, such as total final energy consumption and energy production. The gap in the literature is that there are no studies which examine the relationship between GDP, energy consumption and energy production within a single multivariate model. Although the previous literature has typically relied on short-run horizons of higher-frequency data that may capture business-cycle movements, this study uses a longer time horizon of lower-frequency data and the Toda-Yamamoto test to establish the long-term causality between real GDP and energy consumption in a multivariate framework, using data for Algeria, Egypt, Iran, Iraq, Kuwait, Jordan, the United Arab Emirates (UAE), Bahrain, Lebanon, Libya, Morocco, Saudi Arabia, Syria, Oman, Qatar and Tunisia.
Model and Data

In this research, we combine cross-sectional and time-series data to study the causal relationship between economic growth (real GDP) and the independent variables, total final energy consumption (TFEC) and energy production (EP); all series are expressed in natural logarithms. We get the following Eq. (1): lnGDP_it = f(lnTFEC_it, lnEP_it). We use real GDP (constant 2010 US$) as the measure of economic growth (the dependent variable), while total final energy consumption (million metric tons of oil equivalent, TFEC) and energy production (million metric tons of oil equivalent, EP) are the independent variables. All these variables are obtained from the World Bank and the International Energy Agency. The cointegration relationship we estimate is specified as in Eq. (2): lnGDP_it = β_0 + β_1 lnTFEC_it + β_2 lnEP_it + ε_it, where β_1 and β_2 are the elasticities of real GDP with respect to total final energy consumption and energy production, respectively, and ε is the error term. This research considers the relationship between real GDP and total final energy consumption for a data set consisting of balanced annual data with 512 observations for 16 MENA countries. We analyzed the model between 1985 and 2016, the longest period for which data are available for the variables. We selected 16 countries: Algeria, Egypt, Iran, Iraq, Kuwait, Jordan, UAE, Bahrain, Syria, Libya, Lebanon, Saudi Arabia, Morocco, Oman, Qatar and Tunisia. We excluded Kuwait from the study due to the lack of renewable energy consumption data for the period.

Panel Cointegration Tests

All of the tests proposed by Pedroni (1997, 1999) are based on the residuals of a panel regression. The α parameter is a fixed-effect term that can differ across the individual cross-sections in the panel. Although mostly neglected, a deterministic time trend δ specific to the cross-sections in the panel can also be included in the equation. The critical values and asymptotic distributions depend on whether cross-section fixed effects and cross-section time trends are included in the equation, so critical values are tabulated for each case.

The ARDL Bounds Test

In this paper, the bounds test was used to test the long-term equilibrium relationship between the variables of equation (3). The Engle and Granger (1987) and Johansen (1988) cointegration tests require the time series to be integrated of the same order and a large sample of long-run data. The ARDL bounds test approach proposed by Pesaran et al. (1999) overcomes this problem. The bounds test can be implemented regardless of the order of integration of the time series and has the advantage of being robust in small samples. Narayan (2005) presents the critical bounds for the I(0) and I(1) cases.

Toda-Yamamoto (TY) Causality

The cointegration test verifies the existence of long-term equilibrium relationships between variables. If there is a cointegration relationship, a long-run equilibrium relationship or ECM is estimated. If there is no cointegration relationship, the short-term relationship is analyzed through the differenced variables. In contrast, the Granger causality test examines whether one variable helps predict another. Traditional Granger causality tests require the time series to be stationary and their order of integration to be clearly established. However, the Granger causality test performs poorly if the series have different or unclear orders of integration. As an alternative, the Toda and Yamamoto (TY) method is used (Toda and Yamamoto, 1995).
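To make the bounds-test procedure concrete, here is a minimal sketch for a single country's series, under assumptions of my own (a DataFrame with columns lnGDP, lnTFEC and lnEP read from a hypothetical file, a fixed lag order, and no deterministic trend): the unrestricted error-correction regression is estimated by OLS, and the F-statistic on the lagged levels is then compared with the critical bounds of Pesaran et al. (2001) or Narayan (2005), which are not computed here.

```python
# Minimal sketch of the bounds-test F statistic; column names and the CSV file
# are placeholders, not the authors' actual data or code.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("mena_country.csv")           # hypothetical file: lnGDP, lnTFEC, lnEP
dep, regs, p = "lnGDP", ["lnTFEC", "lnEP"], 2  # dependent variable and lag order

d = df.diff()
X = pd.DataFrame(index=df.index)
for c in [dep] + regs:
    X[f"{c}_L1"] = df[c].shift(1)              # lagged levels (the bounds-test terms)
for lag in range(1, p + 1):
    X[f"d_{dep}_L{lag}"] = d[dep].shift(lag)   # short-run dynamics of the dependent variable
for c in regs:
    X[f"d_{c}"] = d[c]                         # contemporaneous differences of the regressors

data = pd.concat([d[dep].rename("dy"), X], axis=1).dropna()
res = sm.OLS(data["dy"], sm.add_constant(data.drop(columns="dy"))).fit()

# Null of no level (cointegrating) relationship: all lagged-level coefficients are zero.
hypothesis = ", ".join(f"{c}_L1 = 0" for c in [dep] + regs)
print(res.f_test(hypothesis))
```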
The causality test of Toda and Yamamoto (1995) requires estimating the following VAR(k+dmax) model: the real GDP equation, Eq. (5), and the total final energy consumption equation, Eq. (6), where lnGDP is the natural logarithm of real GDP, lnTFEC represents the natural logarithm of total final energy consumption, and lnEP is the natural logarithm of energy production.

The Variables Definition

Our study covers MENA countries on a 16-country basis and spans the period 1985-2016.

Panel Unit Root Test

In panel data analysis, unit root tests are sensitive to the cross-sectional dependence that may exist between the variables. Therefore, cross-sectional dependence between the variables should be investigated in panel data models; disregarding cross-sectional dependence in the variables or the model may lead to biased estimates. In this context, the stationarity properties of the variables in the model were investigated with the panel unit root tests of Levin et al. (2002), Maddala and Wu (1999), Im et al. (1997) and Choi (2001). Table 1 shows the unit root results; the statistics confirm that the three series (lnGDP, lnTFEC, lnEP) follow a first-difference process. For all the variables, the null hypothesis of a panel unit root cannot be rejected in levels. In first differences, this hypothesis is rejected for lnGDP, lnTFEC and lnEP. The tests thus confirm that the series are stationary in first differences, which leads us to conclude that the panel series are all integrated of order one, I(1). The verification of the stationarity properties of all panel variables leads us to study the existence of a long-term relationship between them.

Panel Cointegration Estimation

The long-run relationships between energy consumption, economic growth and energy production are investigated using the bounds testing method. The advantage of the ARDL bounds test approach of Pesaran et al. (2001) is that the existence of a cointegration relationship can be investigated regardless of the degree of integration of the variables. This method is considered suitable for three reasons. First, the bounds test procedure is simple: unlike multivariate cointegration methods such as Johansen and Juselius (1990), the existence of a cointegration relationship is determined after the model, at its chosen lag length, is estimated by least squares. Second, the bounds test procedure does not require pre-testing the variables in the model for unit roots and can be applied whether the series are I(0), I(1), or mutually cointegrated, provided none of them is I(2). Third, the bounds test is more suitable for small samples than traditional cointegration techniques (Haug, 2002).
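A minimal sketch of how the Toda-Yamamoto test of Eqs. (5)-(6) can be run for one country, again under my own assumptions (the same hypothetical data file, a preferred lag order k, and d_max = 1 as suggested by the unit-root results above): each equation of the levels VAR with k + d_max lags is estimated by OLS, and a Wald test is applied to the first k lags of the potentially causing variable only, leaving the extra d_max lags unrestricted.

```python
# Minimal sketch of a Toda-Yamamoto (1995) causality test; variable and file
# names are placeholders, not the authors' code.
import pandas as pd
import statsmodels.api as sm

def toda_yamamoto(df, caused, causing, k, d_max=1):
    """Wald test of 'causing does not Granger-cause caused' in a levels VAR
    with k + d_max lags; only the first k lags of `causing` are restricted."""
    p = k + d_max
    X = pd.DataFrame({f"{c}_L{lag}": df[c].shift(lag)
                      for c in df.columns for lag in range(1, p + 1)})
    data = pd.concat([df[caused].rename("y"), X], axis=1).dropna()
    res = sm.OLS(data["y"], sm.add_constant(data.drop(columns="y"))).fit()
    hypothesis = ", ".join(f"{causing}_L{lag} = 0" for lag in range(1, k + 1))
    return res.wald_test(hypothesis, use_f=False)   # chi-square version, as in TY

df = pd.read_csv("mena_country.csv")[["lnGDP", "lnTFEC", "lnEP"]]  # hypothetical file
print(toda_yamamoto(df, caused="lnTFEC", causing="lnGDP", k=2))    # growth -> energy
print(toda_yamamoto(df, caused="lnGDP", causing="lnTFEC", k=2))    # energy -> growth
```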
The ARDL bounds testing method consists of three steps. In the first step, it is determined whether there is a cointegration relationship between these variables. In the second step, long-run coefficients are determined under the existence of cointegration, and in the last step, the estimation of short-term coefficients is started. Narayan (2005) (Narayan, 2005). In the determination of whether there are structural changes in the long-term coefficients of the ARDL models, CUSUM was performed for systematic changes in coefficients, and CUSUMQ tests were used for detection of sudden and random changes in coefficients (Brown et al., 1975). According to the stability tests (Figure 1), it was concluded that the parameters were stable because the curves of the error term remained within the confidence interval and no artificial variables were needed to maintain stability. The concerned government agencies seek to develop a mix of fossil fuels and renewable sources in a sustainable manner that allows the preservation of the depleted state's sources of oil and natural gas for future generations. Figure 1. Plot of CUSUM and CUSUMQ tests Econometric analysis indicates that no causal relationships can be estimated when there is no long-term equilibrium between variables. In the case of Kuwait, there is no equilibrium relationship, which indicates that there is no causality between the two variables. The existence of a cointegration relationship between GDP and total final energy consumption in Algeria, Jordan, Iran, Egypt, Iraq, UAE, Bahrain, Libya, Lebanon, Morocco, Saudi, Syria, Qatar, Oman and Tunisia suggests that there must be causality in at least one direction. Turning to the effect of energy production on GDP, we find that for Algeria, Iraq, Libya, Saudi Arabia, Oman and Tunisia, energy production has a positive and statistically significant impact on GDP. The magnitude of the impact ranges from 0.38 in the case of Tunisia to 1.27 in the case of Libya. For Jordan, energy production has a negative and statistically significant effect on GDP. The results show that the estimated coefficients of the error correction terms are all significant. These results indicate that all variables used in this research respond to deviations from long-run equilibrium. In terms of the speed of adjustment towards long-term equilibrium, we found that for the whole sample, the real GDP and energy production respond to deviations from long-run equilibrium. Toda-Yamamoto Causality Analysis This long run relationship between the two variables also shows that these variables have a causal relationship. The causality relationship between these variables was investigated by Granger causality based on Toda-Yamamoto method. The causality results are reported in Table 4. There is long run unidirectional causality running from energy consumption to GDP in Lebanon, Morocco, Syria and Oman. which means, an expansion in energy consumption increases GDP in Lebanon, Morocco, Syria and Oman. The growth hypothesis expresses unidirectiona causality from energy consumption to economic growth. Accordingly, while the increase in energy consumption leads to economic growth, decreases in energy consumption will have a negative impact on economic growth. This result corresponds to the results of Bhattacharya et al. 
(2017) which suggests that the deployment of renewable energy consumption and institutions has an important role to play in promoting growth and reducing CO2 emissions Mehrara's (2007) findings for eleven oil-exporting countries and Al-Iriani (2006) results for the 6 countries of the GCC. Both studies found that unidirectional causality emanates from energy consumption consumption to GDP. According to Lütkepohl (1982) in each of these studies, the results could be affected by the omitted variable bias due to the use of bivariate models. For Libya and UAE, bi-directional causality between GDP and energy consumption consumption. Also, an expansion in GDP increases consumption of energy in Libya and UAE. Furthermore, an increase in energy consumption results in higher GDP for Libya and UAE. In addition to the direct impact of energy used in commercial use that contributes to GDP, the increase in energy consumption leads to an increase in energy production, which means expanding in infrastructure and the use of energy. Overall, These countries meet the feedback hypothesis which means that consumption and energy consumption and economic growth are jointly determined and affect each other and indicate the presence of a bi-directional causality relationship between variables. These results are consistent with the study of (e.g. Amri, 2017;Kahia et al. 2016). There is a unidirectional causality running from GDP to energy consumption in Algeria, Egypt, Iraq, Bahrain and as well as for the panel as a whole. Finding neutrality or detecting unidirectional While there is no statistically significant relationship between energy consumption and GDP of the countries of Iran, Jordan, Saudi Arabia, Qatar and Tunisia. Once this hypothesis is confirmed, conservative or expansionary policies in renewable energies will have no impact on growth (Ozturk, 2010). The lack of a causal relationship between energy consumption and economic growth eliminates the possibility that energy conservation policies will adversely affect economic growth (Aytaç, 2010). CONCLUSIONS Today, energy has become the most important factor that shapes the world economy and policies. The main objectives of all countries are to produce more energy, to deliver the produced energy to more people, to ensure the development of poor countries and to leave a livable world to the next generations without damaging the environment. The increasing importance of energy requires an in-depth examination of the energy market. Analyzing the variables in the energy market is an important indicator for the policies to be determined by energy companies, consumers, governments, regulatory agencies and international organizations. Teşekkür: -Peer-review: Externally peer-reviewed. Conflict of Interest: The author has no conflict of interest to declare. Grant Support: The author declared that this study has received no financial support. The blue line represents GDP, while the red line represents energy consumption.
Nanostructured Iron Sulfide/N, S Dual-Doped Carbon Nanotube-Graphene Composites as Efficient Electrocatalysts for Oxygen Reduction Reaction Nanostructured FeS dispersed onto N, S dual-doped carbon nanotube–graphene composite support (FeS/N,S:CNT–GR) was prepared by a simple synthetic method. Annealing an ethanol slurry of Fe precursor, thiourea, carbon nanotube, and graphene oxide at 973 K under N2 atmosphere and subsequent acid treatment produced FeS nanoparticles distributed onto the N, S-doped carbon nanotube–graphene support. The synthesized FeS/N,S:CNT–GR catalyst exhibited significantly enhanced electrochemical performance in the oxygen reduction reaction (ORR) compared with bare FeS, FeS/N,S:GR, and FeS/N,S:CNT with a small half-wave potential (0.827 V) in an alkaline electrolyte. The improved ORR performance, comparable to that of commercial Pt/C, could be attributed to synergy between the small FeS nanoparticles with a high activity and the N, S-doped carbon nanotube–graphene composite support providing high electrical conductivity, large surface area, and additional active sites. Introduction The oxygen reduction reaction (ORR) is crucial for electrochemical energy conversion and storage devices including fuel cells and lithium-air batteries [1,2]. The fuel cell is a promising energy conversion device due to its high energy density, rapid start-up, and zero emissions [3][4][5]. ORR typically occurs at the cathode of fuel cells heavily loaded with platinum [6,7], but its high cost, low abundance, and instability have made the fuel cell a highly expensive device [8]. Thus, it is essential to develop non-precious metal-based ORR catalysts offering high activity and stability for more rapid dissemination of fuel cells [9]. A variety of materials have been investigated as alternative non-Pt catalysts for ORR, including oxides [10], nitrides [11][12][13], sulfides [14,15], carbides [13,16] of transition metals, metal-nitrogen-carbon catalysts (MNC) [17][18][19], and metal-free catalysts [20,21]. Various transition metal sulfides (TMS) of Mo, Fe, Co, Ni, and V have been explored as ORR catalysts due to their earth-abundance, low cost, and considerable activity [14]. Nanostructured TMS have also been considered to further improve the ORR activity, due to increased number of active sites compared to their bulk counterparts. Another way to improve ORR activity is to combine TMS with carbon supports including carbon nanotube (CNT), graphene (GR), and amorphous carbon [15]. Carbon supports can provide high electrical conductivity and large surface area to disperse TMS, enhancing ORR activity [15,22,23]. They can also act as a growth mediator, reducing TMS particle aggregation. Heteroatom (N and S) doping into the carbon supports further increases active sites and hence ORR activity Graphene oxide (GO) was synthesized by Hummer's method [27] and commercial CNT (CMP-301F, Hanwha Nanotech, Incheon, Korea) was acid treated with 90% nitric acid and 99% sulfuric acid solution, 1:3 v/v ratio, at 393 K for 3 h to eliminate residual metal species before use [28]. Typically, GO (116 mg) and CNT (116 mg) were dispersed in 5 mL ethanol. FeCl 2 ·4H 2 O (1 g) was dissolved in 5 mL ethanol and added to the CNT-GO solution. Thiourea (1.5 g) was added to the solution and the solution was stirred for 1 h. The resulting solution was dried in an oven at 373 K to evaporate ethanol and annealed at 973 K (at a heating rate of 10 • C min −1 ) for 3 h under N 2 atmosphere. 
The sample was stirred in 0.5 M sulfuric acid solution for 30 min to obtain pure, crystalline phase FeS loaded onto the N,S:CNT-GR support. FeS/CNT and FeS/GR were prepared following an identical method except only CNT or GO was included during the synthesis. Bare FeS was synthesized identically without any carbon support. Nominal weight content of FeS in the supported FeS catalysts was fixed to 50%, and measured weight contents (by inductively coupled plasma optical emission spectroscopy, ICP) were 52, 50, and 54 wt% for FeS/N,S:CNT-GR, FeS/N,S:CNT, and FeS/N,S:GR, respectively. Electrochemical Characterization Electrochemical characteristics were measured in a conventional three electrode cell with N 2 or O 2 saturated 0.1 M KOH solution, using a potentiostat (Ivium technologies, EIN, The Netherlands) equipped with a rotating disk electrode (Pine research, Durham, NC, USA). Ag/AgCl (3 M NaCl) electrodes and Pt wire were used as reference and counter electrodes, respectively. All potentials were referred to the reversible hydrogen electrode (RHE) without specification. Working electrodes were prepared by dispersing 20 mg catalyst in 2 mL water/ethanol solvent (1:1 v/v) and 40 µL 5% Nafion solution, and then 20 µL catalyst slurry was pipetted onto a glassy carbon electrode (0.19635 cm 2 ). Linear sweep voltammetry (LSV) measurements were performed at 5 mV/s scan rate of 1600 rpm, measured after 20 cyclic voltammetry tests from 0 to 1.2 V to stabilize the current. Durability was investigated by subjecting the samples to 6000 cycles of repeated potential ramp from 1.2 to 0 V. Preparation and Physical Chracterizaton of FeS Catalysts Scheme 1 shows the schematic fabrication procedure for bare and carbon-supported FeS catalysts. For FeS/N,S:CNT-GR, iron precursor reacts with ethanol to form Fe-ethoxide and was then mixed with CNT-GO in ethanol solution. Fe-thiourea complex was generated on the CNT-GO support by adding thiourea, and a powder product was obtained by annealing the Fe-thiourea complex/CNT-GO at 973 K for 3 h under flowing N 2 . The annealing produced mixed FeS, Fe x C, and Fe x N crystalline phases ( Figure S1 of Supporting Information), but Fe x C and Fe x N phases disappeared after the acid treatment, leaving pure FeS phase due to better chemical stability of FeS under acid solution than Fe x C and Fe x N. Crystallization of FeS and reduction of GO to GR proceeded simultaneously during annealing. The N and S-doping into CNT-GR supports was also achieved using thiourea as a source of N and S. This synthetic procedure produced FeS nanoparticles with an average size of 24 nm dispersed on CNT-GR supports. Other carbon-supported FeS catalysts, FeS/N,S:CNT, and FeS/N,S:GR, were prepared following the same synthetic method employed with either CNT or GO, exclusively. Bare FeS was also similarly prepared without carbon support. The CNT-GR hybrid support can provide a large surface area for enhanced contact between FeS nanoparticles and electrolyte [12,29]. Electrochemical Characterization Electrochemical characteristics were measured in a conventional three electrode cell with N2 or O2 saturated 0.1 M KOH solution, using a potentiostat (Ivium technologies, EIN, Netherlands) equipped with a rotating disk electrode (Pine research, Durham, NC, USA). Ag/AgCl (3 M NaCl) electrodes and Pt wire were used as reference and counter electrodes, respectively. All potentials were referred to the reversible hydrogen electrode (RHE) without specification. 
Working electrodes were prepared by dispersing 20 mg catalyst in 2 mL water/ethanol solvent (1:1 v/v) and 40 µL 5% Nafion solution, and then 20 µL catalyst slurry was pipetted onto a glassy carbon electrode (0.19635 cm 2 ). Linear sweep voltammetry (LSV) measurements were performed at 5 mV/s scan rate of 1600 rpm, measured after 20 cyclic voltammetry tests from 0 to 1.2 V to stabilize the current. Durability was investigated by subjecting the samples to 6000 cycles of repeated potential ramp from 1.2 to 0 V. Preparation and Physical Chracterizaton of FeS Catalysts Scheme 1 shows the schematic fabrication procedure for bare and carbon-supported FeS catalysts. For FeS/N,S:CNT-GR, iron precursor reacts with ethanol to form Fe-ethoxide and was then mixed with CNT-GO in ethanol solution. Fe-thiourea complex was generated on the CNT-GO support by adding thiourea, and a powder product was obtained by annealing the Fe-thiourea complex/CNT-GO at 973 K for 3 h under flowing N2. The annealing produced mixed FeS, FexC, and FexN crystalline phases ( Figure S1 of Supporting Information), but FexC and FexN phases disappeared after the acid treatment, leaving pure FeS phase due to better chemical stability of FeS under acid solution than FexC and FexN. Crystallization of FeS and reduction of GO to GR proceeded simultaneously during annealing. The N and S-doping into CNT-GR supports was also achieved using thiourea as a source of N and S. This synthetic procedure produced FeS nanoparticles with an average size of 24 nm dispersed on CNT-GR supports. Other carbon-supported FeS catalysts, FeS/N,S:CNT, and FeS/N,S:GR, were prepared following the same synthetic method employed with either CNT or GO, exclusively. Bare FeS was also similarly prepared without carbon support. The CNT-GR hybrid support can provide a large surface area for enhanced contact between FeS nanoparticles and electrolyte [12,29]. Single carbon support CNT or GO tends to bundle or stack together by itself, significantly limiting the carbon surface to form active sites and thus lowering electrocatalytic activity [30][31][32]. In contrast, the CNT-GR composite support created a three-dimensional open structure, avoiding bundling and stacking [29]. The CNT-GR composite also provides a good electron conducting pathway for the FeS nanoparticles. N and S-doping to carbon supports can enhance the ORR performance by redistributing spin and charge [25,26]. Thus, N,C:CNT-GR could be an effective catalyst support to enhance ORR activity, combining high conductivity and large surface area. Figure 1 shows typical TEM images for prepared catalysts. The TEM image of bare FeS in Figure 1a shows a lattice spacing of 0.299 nm corresponding to the FeS (100) plane. Bare FeS particles were aggregated forming large clusters with approximately 700 nm diameter. In contrast, substantially reduced particle aggregation occurs for carbon-supported FeS catalysts, as shown in Figure 1b-d, with much smaller FeS nanoparticles (20-30 nm) distributed on each carbon support. Metal precursors are attracted by oxygen-containing functional groups within the carbon supports (CNT and GO). Hence FeS particles grow selectively on carbon supports (CNT, GR, and CNT-GR). Strong coupling between FeS particles and carbon supports mitigates FeS nanoparticle aggregation, e.g., Figure 1b shows FeS nanoparticles (28 nm) anchored on CNT without severe aggregation. No free-standing particles occurred, indicating that the CNT support mediated FeS growth and suppressed particle aggregation. 
Figure 1c shows GR layers with a wrinkled paper-like morphology and FeS nanoparticles (36 nm) distributed on the GR layers. Figure 1d Single carbon support CNT or GO tends to bundle or stack together by itself, significantly limiting the carbon surface to form active sites and thus lowering electrocatalytic activity [30][31][32]. In contrast, the CNT-GR composite support created a three-dimensional open structure, avoiding bundling and stacking [29]. The CNT-GR composite also provides a good electron conducting pathway for the FeS nanoparticles. N and S-doping to carbon supports can enhance the ORR performance by redistributing spin and charge densities [25,26]. Thus, N,C:CNT-GR could be an effective catalyst support to enhance ORR activity, combining high conductivity and large surface area. Figure 1 shows typical TEM images for prepared catalysts. The TEM image of bare FeS in Figure 1a shows a lattice spacing of 0.299 nm corresponding to the FeS (100) plane. Bare FeS particles were aggregated forming large clusters with approximately 700 nm diameter. In contrast, substantially reduced particle aggregation occurs for carbon-supported FeS catalysts, as shown in Figure 1b No free-standing particles occurred, indicating that the CNT support mediated FeS growth and suppressed particle aggregation. Figure 1c shows GR layers with a wrinkled paper-like morphology and FeS nanoparticles (36 nm) distributed on the GR layers. Figure 1d shows a mixed CNT and GR morphology with FeS nanoparticles (24 nm peaks at 30°, 34 and 52° can be indexed to hexagonal FeS (JCPDS 03-065-3408). Carbon-supporte samples show broad peaks at 26°, originating from CNT or GR. As mentioned, imp peaks such as FexC or FexN were not present after acid treatment. Figure 2b shows Raman spectra from the prepared catalysts. Intense peaks oc 1350 (D) and 1580 cm −1 (G) and the numbers indicate their intensity ratios, i.e., ID/IG [33]. The D peak is related to sp 2 ring disorder or defects, and the G peak to first scattering from sp 2 domains E2g mode. ID/IG measures the degree of disorder, whe creased ID/IG implies sp 2 carbon restoration and smaller sp 2 domains due to GO redu [34]. Thus, the increased ID/IG ratio in the FeS/GR (1.191) and the FeS/CNT-GR ( compared with of GO (0.937) verifies thermal reduction of GO to GR during the synt Chemical states of FeS/CNT-GR were investigated by X-ray photoelectron spe copy (XPS). Figure 2c shows high resolution Fe 2p XPS spectra for FeS/CNT-GR peaks centered around 710.1 and 723.5 eV are due to Fe 2+ 2p3/2 and Fe 2+ 2p1/2 of FeS, re tively. The peaks at 713.3 eV (Fe 3+ 2p3/2) and 727.3 eV (Fe 3+ 2p1/2) suggest partial oxid of the catalyst surface. [35,36]. Figure 2d shows N 1s spectra, with peaks at 398.5, 401.1, and 403.8 eV corresponding to pyridinic, pyrrolic, graphitic, and oxidized N cies, respectively [37]. Surface nitrogen content due to N-doping was 6.0 at% from th survey scan. N-doping to carbon improves ORR performance by enhancing electrica ductivity of carbon or increasing defect sites of ORR activity [38,39]. Figure S3a s high resolution S 2p XPS spectra for FeS/N,S:CNT-GR, which could be deconvolute five peaks. Peaks at 161.2 and 162.8eV are related to S 2− 2p3/2 and S 2− 2p1/2 in FeS, re tively; Peaks at 164.5 and 165.5 eV originate from polysulfide S in the carbon plan peak at 168.2 eV is attributed to sulphate (SO4 2− ) species due to acid treatment or p oxidation of sulfide upon air exposure [40,41]. 
In S 2p spectra of N, S-doped CN Figure 2b shows Raman spectra from the prepared catalysts. Intense peaks occur at 1350 (D) and 1580 cm −1 (G) and the numbers indicate their intensity ratios, i.e., I D /I G ratios [33]. The D peak is related to sp 2 ring disorder or defects, and the G peak to first order scattering from sp 2 domains E 2g mode. I D /I G measures the degree of disorder, where increased I D /I G implies sp 2 carbon restoration and smaller sp 2 domains due to GO reduction [34]. Thus, the increased I D /I G ratio in the FeS/GR (1.191) and the FeS/CNT-GR (1.190) compared with of GO (0.937) verifies thermal reduction of GO to GR during the synthesis. Chemical states of FeS/CNT-GR were investigated by X-ray photoelectron spectroscopy (XPS). Figure 2c shows high resolution Fe 2p XPS spectra for FeS/CNT-GR. The peaks centered around 710.1 and 723.5 eV are due to Fe 2+ 2p 3/2 and Fe 2+ 2p 1/2 of FeS, respectively. The peaks at 713.3 eV (Fe 3+ 2p 3/2 ) and 727.3 eV (Fe 3+ 2p 1/2 ) suggest partial oxidation of the catalyst surface. [35,36]. Figure 2d shows N 1s spectra, with peaks at 398.5, 399.6, 401.1, and 403.8 eV corresponding to pyridinic, pyrrolic, graphitic, and oxidized N species, respectively [37]. Surface nitrogen content due to N-doping was 6.0 at% from the XPS survey scan. N-doping to carbon improves ORR performance by enhancing electrical conductivity of carbon or increasing defect sites of ORR activity [38,39]. Figure S3a shows high resolution S 2p XPS spectra for FeS/N,S:CNT-GR, which could be deconvoluted into five peaks. Peaks at 161.2 and 162.8eV are related to S 2− 2p 3/2 and S 2− 2p 1/2 in FeS, respectively; Peaks at 164.5 and 165.5 eV originate from polysulfide S in the carbon plane; the peak at 168.2 eV is attributed to sulphate (SO 4 2− ) species due to acid treatment or partial oxidation of sulfide upon air exposure [40,41]. In S 2p spectra of N, S-doped CNT-GR made without Fe precursor ( Figure S3b), FeS-related peaks disappeared while the peaks attributed to polysulfide S in the carbon plane and sulphate species were maintained. The results indicate that the thiourea was the sulfur source for FeS crystallization and S-doping to the carbon supports. Figure S4 shows XPS spectra for FeS/N,S:CNT and FeS/N,S:GR. They exhibit similar Fe 2p, N 1s, and S 2p peaks to FeS/N,S:CNT-GR, suggesting similar chemical states of FeS for all the carbon-supported catalysts. Textural properties for the prepared catalysts were investigated by N 2 adsorptiondesorption isotherms, and compared with other TMS/carbon catalysts in Table S1 of Supporting Information. Figure 3a shows that carbon-supported FeS samples exhibited the type IV isotherm with a typical hysteresis of the isotherms, indicating the presence of mesopores, whereas bare FeS does not show clear hysteresis. Brunauer-Emmett-Teller (BET) surface area for bare FeS was 14 m 2 g −1 , whereas FeS/N,S:CNT-GR achieved significantly improved BET surface area of 191 m 2 g −1 after introducing the N,S:CNT-GR support. FeS/N,S:CNT and FeS/N,S:GR also showed increased BET surface areas of 174 and 137 m 2 g −1 , respectively. The larger surface area of FeS/N,S:CNT-GR compared with FeS/N,S:CNT and FeS/N,S:GR was due to a synergy between CNT and GR acting as spacers for each other, alleviating CNT's bundling and GR layers' stacking [12]. Pore size distribution was determined using the desorption isotherm following the Barrett-Joyner-Halenda method. 
Average pore size for all carbon-supported FeS catalysts was~4 nm (Figure 3b), and pore volume varied as FeS/N,S:CNT-GR (0.5568 cm 3 /g) > FeS/N,S:CNT (0.5067 cm 3 /g) > FeS/N,S:GR (0.4077 cm 3 /g). The large surface area with abundant mesopores will stabilize smaller FeS nanoparticles, leading to improved catalytic activity for ORR [42]. Materials 2021, 14, x FOR PEER REVIEW 6 of 10 made without Fe precursor ( Figure S3b), FeS-related peaks disappeared while the peaks attributed to polysulfide S in the carbon plane and sulphate species were maintained. The results indicate that the thiourea was the sulfur source for FeS crystallization and S-doping to the carbon supports. Figure S4 shows XPS spectra for FeS/N,S:CNT and FeS/N,S:GR. They exhibit similar Fe 2p, N 1s, and S 2p peaks to FeS/N,S:CNT-GR, suggesting similar chemical states of FeS for all the carbon-supported catalysts. Textural properties for the prepared catalysts were investigated by N2 adsorptiondesorption isotherms, and compared with other TMS/carbon catalysts in Table S1 of Supporting Information. Figure 3a shows that carbon-supported FeS samples exhibited the type IV isotherm with a typical hysteresis of the isotherms, indicating the presence of mesopores, whereas bare FeS does not show clear hysteresis. Brunauer-Emmett-Teller (BET) surface area for bare FeS was 14 m 2 g −1 , whereas FeS/N,S:CNT-GR achieved significantly improved BET surface area of 191 m 2 g −1 after introducing the N,S:CNT-GR support. FeS/N,S:CNT and FeS/N,S:GR also showed increased BET surface areas of 174 and 137 m 2 g −1 , respectively. The larger surface area of FeS/N,S:CNT-GR compared with FeS/N,S:CNT and FeS/N,S:GR was due to a synergy between CNT and GR acting as spacers for each other, alleviating CNT's bundling and GR layers' stacking [12]. Pore size distribution was determined using the desorption isotherm following the Barrett-Joyner-Halenda method. Average pore size for all carbon-supported FeS catalysts was ~4 nm (Figure 3b), and pore volume varied as FeS/N,S:CNT-GR (0.5568 cm 3 /g) > FeS/N,S:CNT (0.5067 cm 3 /g) > FeS/N,S:GR (0.4077 cm 3 /g). The large surface area with abundant mesopores will stabilize smaller FeS nanoparticles, leading to improved catalytic activity for ORR [42]. . Bare FeS showed poor ORR activity with an Eonset of 0.8 V and E 1/2 of 0.58 V. Thus, carbon supports enhanced ORR activity substantially, demonstrating their critical importance, providing high conductivity and large surface area for the loaded FeS nanoparticles. Besides, dual N, S-doping to the carbon supports further enhanced the activity due to changing the charge distribution in the carbon framework and improving electrical conductivity compared with undoped carbon supports [25,26,43,44]. These effects were maximized in FeS/N,S:CNT-GR, achieving a top performance compared to previously reported iron or other TMS-based catalysts (Table S2). electrical conductivity compared with undoped carbon supports [25,26,43,44]. These effects were maximized in FeS/N,S:CNT-GR, achieving a top performance compared to previously reported iron or other TMS-based catalysts (Table S2). Figure 4b shows that the current density of FeS/N,S:CNT-GR increases with increasing rotation speed at the same potential due to enhanced oxygen diffusion on the electrode. 
Figure S5a shows Koutecky-Levich (K-L) plots for FeS/N,S:CNT-GR described as Electrochmical Chracterizaton of FeS Catalysts where J is measured current density; JD is diffusion-limited current density; JK is kinetic current density; ω is electrode rotation speed; Figure 4b shows that the current density of FeS/N,S:CNT-GR increases with increasing rotation speed at the same potential due to enhanced oxygen diffusion on the electrode. Figure S5a shows Koutecky-Levich (K-L) plots for FeS/N,S:CNT-GR described as where J is measured current density; J D is diffusion-limited current density; J K is kinetic current density; ω is electrode rotation speed; is the K-L plot slope, where n is electron transfer number, F is Faraday's constant (96,486 C mol −1 ), C 0 is bulk concentration of oxygen in 0.1M KOH solution (1.2 × 10 −6 mol cm −3 ), D is diffusion coefficient of oxygen in 0.1M KOH (1.9 × 10 −5 cm 2 ·s −1 ), and υ is kinetic viscosity of oxygen in 0.1M KOH (1.0 × 10 −2 cm 2 s −1 ) [45]. The obtained electron transfer numbers are close to 4.0 ( Figure S5b), indicating the dominant four electron ORR catalytic pathway proceeds for the FeS/N,S:CNT-GR catalyst. Figure 4c shows LSV curves for FeS/N,S:CNT-GR after 6000 potential cycles between 1.2 and 0 V. Activity loss for FeS/N,S:CNT-GR was marginal with slightly decreased E 1/2 = 0.821 V from 0.827 V, whereas commercial Pt/C recorded a 30 mV decrease in E 1/2 . Similarly, the current loss for the FeS/N,S:CNT-GR was~6% after continuous operation for 10,000 s at 0.7 V (Figure 4d), whereas commercial Pt/C showed much faster current decay of 27%, indicating that our FeS/N,S:CNT has a high ORR activity comparable to Pt/C, and much better ORR stability than Pt/C. This excellent ORR activity and stability of the FeS/N,S:CNT-GR catalyst could arise from synergy between small FeS nanoparticles and the N, S dual-doped CNT-GR support. The N,S:CNT-GR support provides a high electrical conductivity and a large surface area, improving FeS activity and electrolyte contact to active sites (FeS). Indeed, in Figure S6, FeS/N,S:CNT-GR achieved considerably higher double layer capacitance C dl of 36.6 mF/cm 2 compared with FeS/N,S:GR, FeS/N,S:CNT, and bare FeS (C dl values of 10.5, 24.6, and 1.16 mF/cm 2 , respectively) [46]. The C dl value is proportional to contact area between electrode and electrolyte (or electrochemical surface area, ECSA), hence the N,S:CNT-GR support efficiently increases the contact area, alleviating CNT bundling and GR layers stacking [12,39]. Furthermore, simultaneous N and S-doping to CNT-GR was easily achieved by employing thiourea, improving the ORR performance by redistributing spin and charge densities [21,24]. The CNT-GR support also mediates FeS growth, reducing particle aggregation and further increasing FeS reaction sites. Figure S7 shows that the N, S dual-doped CNT-GR prepared by identical synthetic methods without Fe precursor has its own ORR activity, achieving E 1/2 of 0.762 V. Hence, combining it with FeS considerably improved ORR activity of the FeS/N,S:CNT-GR catalyst, indicating a synergy between FeS and N,S:CNT-GR. Thus, considering the facile synthetic method and high ORR performance, FeS/N,S:CNT-GR catalysts constitute a potential candidate to substitute commercial Pt/C catalyst. 
Conclusions This work successfully prepared a non-precious metal catalyst for ORR comprising FeS nanoparticles dispersed onto N, S dual-doped CNT-GR composite supports through a simple annealing and acid treatment, achieving simultaneous FeS formation and N, S dual-doping into CNT-GR. The synthesized FeS/N,S:CNT-GR catalyst exhibited the highest ORR performance among prepared FeS-based catalysts with a small E 1/2 value of 0.827 V, comparable to commercial Pt/C. Improved ORR performance was attributed to a synergy between the small FeS nanoparticles with high activity and N, S dual-doped CNT-GR support providing improved high electrical conductivity, large surface area, and its own ORR performance caused by modified electronic structure by the dual doping. Thus, N,S:CNT-GR composite support mediates FeS growth, reducing particle aggregation, and further increasing reaction sites. This FeS/N,S:CNT-GR catalyst offers a potential non-precious metal ORR catalyst of a high activity with a good stability. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/ma14092146/s1, Figure S1. XRD patterns of FeS/N,S:CNT-GR catalyst before and after acid treatment., Figure S2. EDS elemental mapping images of FeS/N,S:CNT-GR for C, Fe, S, and N., Figure S3. XPS S 2p spectra of (a) FeS/N,S:CNT-GR and (b) N,S:CNT-GR., Figure S4. Table S1. Comparison of BET surface area, pore volume, and average pores size of the prepared catalysts with TMS-based electrocatalysts., Table S2
Martini: using literature keywords to compare gene sets Life scientists are often interested to compare two gene sets to gain insight into differences between two distinct, but related, phenotypes or conditions. Several tools have been developed for comparing gene sets, most of which find Gene Ontology (GO) terms that are significantly over-represented in one gene set. However, such tools often return GO terms that are too generic or too few to be informative. Here, we present Martini, an easy-to-use tool for comparing gene sets. Martini is based, not on GO, but on keywords extracted from Medline abstracts; Martini also supports a much wider range of species than comparable tools. To evaluate Martini we created a benchmark based on the human cell cycle, and we tested several comparable tools (CoPub, FatiGO, Marmite and ProfCom). Martini had the best benchmark performance, delivering a more detailed and accurate description of function. Martini also gave best or equal performance with three other datasets (related to Arabidopsis, melanoma and ovarian cancer), suggesting that Martini represents an advance in the automated comparison of gene sets. In agreement with previous studies, our results further suggest that literature-derived keywords are a richer source of gene-function information than GO annotations. Martini is freely available at http://martini.embl.de. INTRODUCTION High-throughput experiments such as microarrays, mass spectrometry, or automated digital microscopy often produce a single list of genes associated with a specific phenotype or condition, and many computational tools have been developed to help biologists use such a list to gain insight into the underlying biological processes (1). Some of these tools even allow end-users to interactively explore gene sets, and to identify functional sub-clusters (2,3). While a single gene set is probably the most common outcome of a single experiment, scientists are often interested to compare two sets defined by two distinct, but related, phenotypes or conditions. For example, a scientist may want to compare the set of genes associated with a primary cancer versus those associated with the metastatic form of the same cancer (4). Alternatively, a scientist might want to compare genes associated with a disease to the genes associated with the presence of a drug. For this article, we briefly reviewed the available tools for analyzing gene lists; we found that most tools allow only a single input gene list, which is usually compared with the background of all remaining genes in the same organism. Only a subset of tools allow the user to explicitly ask the more specific question: 'how do two gene sets differ?' Clearly, the ability to answer this question is important and relevant to many life scientists today. Of the tools that do address this question [e.g. FatiGO (5) and ProfCom (6)], most are based on GO (7), a controlled vocabulary of $30 000 terms for describing gene function. GO-based tools find GO terms that are significantly over-represented in one set of genes versus a second reference set. However, dependency on GO gives rise to some limitations (8). For many genes, GO annotations give a very incomplete description of function, e.g. human genes in Entrez (http://tinyurl .com/entrez-gene) have a median of only seven GO terms. As a result, GO-based tools sometimes produce disappointing results, finding only a few, or only very general, GO terms (e.g. see 'Results' section). 
An alternative approach is to examine the literature cited in each gene entry, and extract keywords that can be used to describe gene function. In most cases, the literature associated with a gene gives a much richer description of function than the currently available GO terms. For example, human genes in Entrez have a median of nine Medline (http://pubmed.org) abstracts; clearly, nine abstracts will contain more information than just seven GO terms, although the exact number of keywords *To whom correspondence should be addressed. Email: sean.odonoghue@embl.de extracted per gene will depend on the size and scope of the keyword dictionary used. Indeed, in at least one previous study it has been reported that literature-based approaches give more sensitive and more specific results than using only GO terms (9). Several such keyword-based methods have been described in the literature (9)(10)(11)(12)(13)(14), however, we could only find two systems that are provided as automated, freely available services, namely CoPub (14) and Marmite (13). CoPub is based on a dictionary of 250 000 keywords, including gene and pathway names, GO terms, diseases, drugs and tissues. CoPub can only analyze a single gene set, and it is further restricted to only human, mouse or rat genes. Marmite is based on three types of keywords, namely diseases, chemicals and 'word roots', or generic 'bio-terms'. Marmite can compare two gene sets, but is restricted to human genes only. In this work we present Martini, a new, easy-to-use tool that allows end-users to compare two gene sets using a sensitive, keyword-based method. Martini can be used with genes from a large number of species, and it uses a rich keyword dictionary of over 3 million terms, including gene names, drugs, chemicals, diseases, symptoms, organisms and processes/bio-actions. Input data and initial processing Martini requires the end-user to specify two input sets. By default Martini assumes the input sets are lists of Entrez gene IDs (http://tinyurl.com/entrez-gene), in which case Martini retrieves, for each gene, all PubMed IDs (http://pubmed.org) that are referred to in the Entrez entry including the GeneRIFs and interaction records. The mapping from genes to abstracts is retrieved offline using SRS (15) and stored in random access memory (RAM) to enable fast access while processing jobs. Any gene IDs that occur in both input sets are ignored for the purpose of subsequent analysis. Alternatively, the end-user can specify a PubMed query, in which case Martini queries PubMed via Entrez Programming Utilities (http://tinyurl.com/eutils-help) and retrieves a list of PubMed IDs. As a third alternative, Martini allows the end-user to specify a list of PubMed IDs directly as input. Thus, for each of the three different types of input to Martini, the initial processing results in a list of PubMed IDs. The next step is to convert each PubMed ID into a list of keywords. For this, we used the AKS2 literature analysis tool (http://tinyurl.com/bioalma-aks2), which is based on a keyword dictionary of over 3 million entries covering the following types: drugs, chemicals, genes, diseases, symptoms, bio-actions and other generic biologically-relevant keywords. In AKS2, this dictionary has been used to pre-tag keywords in Medline abstracts, resulting in an average of 32 keywords per abstract. In Martini, we extract this information into a hash table, linking each Medline abstract to a list of AKS2 keywords. 
By default, Martini uses all keyword types (genes, drugs, diseases, etc.), however the user can choose to exclude some types via the 'Advanced' option. Martini relies on the literature that is linked to Entrez gene entries. In some cases, the Entrez gene entries have no associated abstracts. In other cases, some Medline entries contain only titles not abstracts. Another potential limitation is that AKS2 indexes only the latest 9 million Medline abstracts ignoring older entries. In addition, to reduce server load and processing time, the total number of entries in each input field of Martini is limited to either 25 000 genes or abstracts-if the user specifies more than this limit, the job will not run, and the user will be asked to reduce the size of the input sets. Keyword enhancement After the initial processing described above, Martini analyzes each keyword separately to test for statisticallysignificant over-representation in one input set compared with the other set, using the two-tailed version of Fisher's exact test (16). If the user specified either a list of PubMed IDs or a query, Martini counts the numbers of abstracts in which the keyword occurs at least once. Alternatively, if the user specified genes as input, Martini counts the numbers of genes in which the keyword occurs at least once in any of the abstracts associated with each gene. To account for the total number of keywords tested, we used the false discovery rate method of Benjamini and Hochberg (17) with a, the fraction of false positives considered acceptable, set to 5%. The Fisher p-value for each keyword was then adjusted using: where n is the total number of entities in a set, and k is the rank of the largest p-value that satisfies the false discovery criteria, calculated separately for keywords associated with each of the two input sets. By default, Martini returns only those keywords for which the adjusted p-value is <5%. However, Martini also provides users access to all keywords found, including those with higher p-value. The Benjamini and Hochberg method assumes that all p-values are mutually independent, which is clearly not true since some words are very likely to appear together. However the method errs on the conservative side, hence we end up rejecting more words than we should. Ideally we would instead use a permutationbased approach, as some authors have in similar cases (18). However, currently this would not be feasible for Martini as it would require a significantly longer response time. Comparisons with similar tools We surveyed the literature for methods that can compare two gene lists; some of these methods have not been made available as free tools or services, and others were once available, but are no longer working. Several of the available tools have a rather complex user interface; these tools may have rich functional capabilities, but they do not provide end-users with a simple way to compare two gene sets. We found three tools that we considered to be comparable with Martini, namely FatiGO (5), Marmite (13) and ProfCom (6). For testing these tools we used default parameters, except for FatiGO, where we used all available database sources (GO terms, pathway names, etc.). We also tested CoPub (14), which uses a similarly rich keyword dictionary to Martini, but cannot compare two genes lists: instead CoPub effectively compares one list to the background of all genes from the same organism. 
However, we included results from CoPub for one dataset (cell cycle, see below) primarily to illustrate the benefit of using two datasets. For CoPub, we again used default parameters and the following search categories: 'biological processes', 'Pathway', 'Drug' and 'Disease'. In assessing the output of these tools, we attempted to manually assign each keyword or GO term into one of three categories: 'true positive', 'false positive' or 'uninformative'. The criteria we used for these assignments are as follows: True positives were defined as terms that refer to processes or entities that are unambiguously correct, given the biological context of the dataset, determined by a manual literature search. False positives were terms that refer to processes or entities that are unambiguously incorrect, given the biological context. Finally, uninformative terms were simply those that are not clearly right (true positive) and not clearly wrong (false positive). Datasets To compare our work with other tools, we used several datasets described in this section-these datasets are also available at http://martini.embl.de. Arabidopsis. To create a simple test dataset, we used the Arabidopsis Information Resource, TAIR (19), to find 269 Arabidopsis genes that matched the term 'disease resistance'. We randomly selected a further 514 Arabidopsis genes that did not match this search term. Cell cycle. This dataset consisted of 600 periodically expressed human genes identified by Jensen et al. (20) from the original dataset of Whitfield et al. (21). Based on when in the cell cycle each gene is most highly expressed, Jensen et al. (20) assigned each gene to a specific 'peak time', expressed as a percentage of cell-cycle progress, with 100% (equivalently 0%) corresponding to the moment of cell division. To divide this dataset into two input sets (A and B), we used a window of width 10% and slid this window in steps of 1% around the cell cycle. For example, genes occurring from 1 to 10% of the cycle were assigned to set A, and the remaining genes from 11 to 100% were assigned to set B. Next, genes from 2 to 11% were assigned to set A, and so on. In addition, we partitioned the 600 genes into four subsets corresponding to the classic cell-cycle phases and used these subsets for a four-state comparison. Melanoma. This dataset consisted of 290 genes highly expressed in metastatic melanoma, and 899 genes highly expressed in primary melanoma based on microarray analysis (4). Ovarian cancer. This data set consisted of 160 genes associated with clear-cell ovarian cancer, and 105 genes associated with non-clear-cell ovarian cancer, which includes serous and endometrioid ovarian cancers grouped together (22). Cyclic keyword layout Keywords and GO terms determined using the cell-cycle dataset were arranged in a circle using a layout algorithm developed for this work, and written in Mathematica (23). The algorithm first places each word along a circular arc that spans the exact region of the cell cycle in which the word is significantly over-represented. Next, the algorithm determines the radius at which to print each word. This is determined primarily based on the character density, i.e. number of characters in each keyword divided by the arc length. Thus, shorter words are placed closer to the center. Finally, the radial position for some words is modified slightly to avoid overlaps with neighboring words. Figure 1 shows the output of a typical keyword analysis with Martini. 
In this case, Martini was given two input sets of genes-the first set contained 269 Arabidopsis genes known to be associated with disease resistance mechanisms; the second set consisted of 514 genes with no clear link to disease. Martini found 60 keywords that were significantly over-represented in either of the two input sets (Figure 1). Manually checking each keyword, we considered the majority (48 out of 60) to be true positives, i.e. to be clearly related to disease resistance mechanisms in plants. For example, Pseudomonas is a common plant pathogen, and salicylic acid is a phytohormone that is used by plants in triggering the defense-signaling pathway. Arabidopsis dataset The 12 keywords that were not true positives were: access, allele, cause, cognate, cross, enzyme, experiment, gene product, nucleotide, selected, situation and ursus sp. We considered that none of these satisfied the criteria for false positives (see 'Methods' section), hence we classified them as 'uninformative'. Most of these 12 are too generic to be properly considered as 'keywords', and in future versions of Martini we plan to automatically blacklist such uninformative terms. For comparison, the Arabidopsis datasets were also analyzed using FatiGO, Marmite and ProfCom, and in each case exactly zero terms were found. Table 1 shows the time taken for Martini keyword enhancement. Generally, the time taken scales better than linearly with input size, however datasets involving many well-studied genes will be slower than this estimate. Cell-cycle dataset We next tested the keyword enhancement feature of Martini on a set of 600 human cell-cycle-regulated genes (20). The human cell cycle is relatively well-studied and understood, and many of the genes in this data set are well-characterized (98% are linked to Medline abstracts describing their function and 86% have GO annotations levels 3-9 in the GO ontology). Thus, we may expect not only that methods such as Martini should perform well with these data, but also that this set may be a good benchmark, since it should be straightforward to assess the accuracy of the resulting keywords and GO terms. Each of these 600 genes has been assigned to a specific time point within the cell cycle at which the gene is maximally expressed (20). These time points are given as a percentage of cell-cycle progress rather than hours since the cycle duration varies between growth conditions. To construct pairs of gene sets, we used a sliding window spanning 10% of the cell cycle, and we compared all genes within the window with the remaining cell-cycle genes. Sliding the window in 1% steps, we generated 100 Martini keyword analyses spanning the entire cell cycle. In Figure 2, these results are arranged in a cyclic layout (see 'Methods' section), where each keyword has been placed to show the exact region of the cell cycle where the keyword is significantly over-represented. The keywords cluster into three distinct groups: (i) a prereplication phase (late G1, corresponding to cell-cycle progress from 41 to 52% in Figure 2) defined by keywords that describe the initiation of DNA replication and the checkpoints that can prevent initiation from taking place; (ii) S-phase (53-63%), defined by keywords that describe the proteins, complexes and processes associated with the replication machinery; (iii) M-phase (79-100%), which has no keywords for proteins or complexes, but has keywords that describe the cell division sub-processes. 
In G1 and G2 phase (1-40% and 64-78%, respectively) no enhanced keywords are seen, consistent with the generally-accepted belief that relatively few processes are specific to these 'gap' phases. Assessed qualitatively, Figure 2 shows a surprisingly accurate and precise match to the events and entities Figure 1. Martini keyword output for the Arabidopsis dataset. All significantly enhanced keywords are shown first as a 'keyword cloud', where the size of each keyword is proportional to its statistical significance. The keywords assigned to input sets A or B are colored blue or black, respectively. Below the keyword cloud, the significant keywords are shown again in a table form, including: the number of times each keyword occurs in each set; the enhancement factor (i.e. the ratio of the previous numbers); finally, the table gives an adjusted p-value, which is an estimation of the likelihood that the given level of keyword enhancement occurred by chance. Note that the total number of genes or abstracts shown in this table may be slightly less than the number in the user-defined input. This may happen for two reasons: first, depending on the user's choice of genes or abstracts as input, Martini will remove common items; secondly, some abstracts may not have been indexed in AKS2, and hence they are not counted. This table can be used to estimate the time required for a Martini analysis, assuming linear scaling with total input size. For example, to perform a keyword enhancement using two sets of 500 genes (=total input of 1000 genes) takes $20 min, i.e. 10 times longer than for 100 genes. The estimates given here are for genes with nine Medline abstracts (i.e. the median number for human genes). Scaling can be highly non-linear, e.g. including well-studied genes can take much longer. However, in practice the actual time taken is often less than the time estimated from this table. known to occur at different stages of the cell cycle. Of the 72 total keywords found by Martini, we considered 67 to be 'true positives', i.e. to occur at the correct position in the cell cycle. The remaining five keywords-'874 Amino Acids', 'Extractable', 'Femtomole', 'Tungsten', '20 specific protein'-we would classified as 'uninformative' rather than 'false positives', since these keywords do not imply incorrect processes or entities. To quantitate the accuracy and precision of the keywords and terms, we divided the 600 genes into four groups corresponding to the classic phases G1 (cell cycle progress from 1 to 40% in Figure 2, giving 113 genes), S (41-63%, 154 genes), G2 (64-78%, 82 genes) and M (79-100%, 251 genes). These gene sets were then used to perform a much simpler four-step analysis, shown in Table 2, where we compared the genes in each phase with those in the other three phases (e.g. G1 versus S+G2+M, etc.). For each of the tools, we then manually classified each term found as either true positive, false positive or uninformative using the following criteria: True positives are keywords that have definitely been assigned to the correct cell-cycle phase, i.e. they match to processes or entities known to occur specifically within that phase. False positives are keywords that match to cell-cycle processes, but have definitely been assigned to the incorrect phase, e.g. FatiGO finds the term 'M phase' associated with G1 genes. Since the dataset was defined as genes specific to the mitotic cell cycle, we considered any meiosis-specific keywords to be false positives. 
Finally, uninformative keywords are those that are not clearly right (true positive) and not clearly wrong (false positive). CoPub cannot compare two lists, and the results shown were generated effectively by comparing each of the four gene subsets against the background of all other human genes. As expected, CoPub gives less precise results with more false positives. In fact due to space limitations in Table 2, we show only 'biological processes' from CoPub; including the other CoPub categories ('drug', 'pathway', 'disease' and 'liver pathology') gives nearly twice as many keywords with a similar pattern of true and false positives. Some of the keywords we classified as uninformative could arguably be regarded as false positives. For example, CoPub finds 'G2 checkpoint' and 'G2/M checkpoint' associated with M-phase genes, however, since these terms describe a process happening between two phases, in this simple four-state analysis, we considered such terms to be neither clearly right or wrong. Similarly, the Rb:E2F-1:DP-1 transcription factor found . Uninformative: Biopolymer metabolic process; cell cycle, phase, process; cellular metabolic process; cytokine production; cytoskeleton, organization and biogenesis; macromolecule metabolic process; microtubule cytoskeleton organization and biogenesis; nucleic acid binding; primary metabolic process; response to DNA damage stimulus; response to endogenous stimulus; response to stress. - The output of different tools applied to four gene sets, corresponding to the cell-cycle phases G1, S, G2 and M. Each output term was categorized as 'true positive', 'false positive' or 'uninformative'. Superscripts indicate matches to the cell-cycle benchmark (Table 3). Qualitatively, Martini gave the best performance. by FatiGO belongs to the switch from G1 to S phase. Terms such as 'cell cycle', 'cell cycle checkpoint' and 'hydrolase' are not incorrect, but since they refer to processes throughout the entire cycle, it is also not correct to assign them to a single cell-cycle phase. Another borderline case is 'DNA damage', which is a key feature of S-phase, but is also present in other phases, hence we regarded it as a true positive if it occurs in S-phase, but as uninformative for other phases. CoPub also finds terms such as 'lung development' that appear to be incorrect, given how the gene set was defined, however since this term does not clearly match to any specific cell-cycle process, we categorized it as 'uninformative'. To calculate a recall score, we created a benchmark or 'score card' that defines 20 main phases, sub-processes and key components in the human cell cycle (Table 3). Each true positive in Table 2 was then mapped onto one row of Table 3, allowing us to count non-redundant true positives (tp), and also to count false negatives (fn, i.e. rows in Table 3 for which a tool has no matching keywords). The recall was then calculated as tp/ (tp + fn), and the precision calculated as tp/(tp + fp), where fp stands for the number of false positives in Table 3. Note that the number of false positives has no clear limit, hence the precision score used here is an estimate of the 'true' precision. Of the five tools tested against this benchmark, Martini clearly gave the best performance, with 60% recall and 100% precision. CoPub found many more keywords and had similarly good recall (60%), but only 17% precision (i.e. many false positives). FatiGO also found more keywords than Martini, but had lower recall (25%) and lower precision (45%). 
Marmite found zero terms in all of the phases, while ProfCom found only the single term 'hydrolase activity' that we judged to be uninformative. Melanoma dataset We next tested keyword enhancement using two gene sets, one associated with primary melanoma and another with metastatic melanoma (4). In contrast to the very specific comparisons of cell-cycle phases in Table 2, comparing these two types of melanoma corresponds to asking a more general question. We considered the melanoma dataset to be not a good candidate as a benchmark, unlike the cell-cycle dataset, but probably a more realistic or typical case. Table 4 compares the output of FatiGO, Marmite, Martini and ProfCom with these data. We manually classified each keyword found as either mitosis-related, uninformative, or 'not mitosis-related' using the following criteria (different to the cell-cycle criteria). Mitosis-related keywords have a clear relation to the major mitosisspecific processes. Since mitotic cell division is what we would expect to see associated with metastatic cancer, we considered these keywords to be true positives. Uninformative keywords were either too generic (e.g. 'assemblies' or 'biogenesis'), or related to experimental techniques (e.g. 'co-immunoprecipitation'), or related to model organisms ('cerevisiae' or 'sporulation'). Any remaining keywords were classified as Not mitosisrelated. Keywords in this final category are the most interesting as their connection to melanoma and metastasis is, in many cases, not immediately obvious. In contrast to Arabidopsis and the human cell-cycle, where many of us have extensive experience, we had little previous experience with the melanoma literature, and hence we were less confident in deciding true and false positives. Martini found 264 significantly-enhanced keywords, a much larger number than the other methods (Table 4). Of the keywords found by Martini, 109 were mitosisrelated and 79 were uninformative. This left 76 keywords assigned as 'not mitosis-related'; for each of these we manually checked the literature for evidence of a connection to melanoma or metastasis. For some keywords, this connection was straightforward, e.g. skin, cornea, lymphoid, HeLa cells, desmosome, intermediate filaments, involucrin, calcium, as well as several skin diseases. For other keywords, the connection was less obvious, but was supported by the literature: e.g. polyploidy (24), cornification and bone marrow cells (25), heat-shock/ chaperone proteins (26), cystic fibrosis (27), ATM kinases (28), CHK1 (29), neurites (30). Perhaps the most interesting keywords found by Martini are the names of several of the MAGE (melanoma-associated genes) gene family. These genes are normally expressed only in developing sperm, where they play a role in meiotic cell division. However, these genes are also expressed in melanoma (31,32). FatiGO found 4 transcription factors and 47 GO terms, of which 36 were classified as not mitosis-related (Table 4). As with Martini, all the terms in the 'not mitosis-related' category seemed to have a general connection to melanoma or metastasis, hence none were obviously false positives. Interestingly, FatiGO does not find the link to spermatogensis. Comparing Martini and FatiGO qualitatively, both seemed to have similar precision with this dataset, i.e. all terms and keywords found were either uninformative or, as best as we could judge true positives, correctly indicating a connection to melanoma or metastasis. 
Martini found many more keywords, more-specific keywords and also more uninformative keywords. Martini also found many processes related to melanoma and metastasis that were not found by FatiGO. Thus, we conclude that Martini had qualitatively a somewhat higher recall; however, unlike the human cell cycle, we cannot quantify this since we do not have the background to construct a benchmark covering all the major processes and components involved. Marmite and ProfCom did not perform well with this dataset, finding almost no terms (Table 4). [Table 4: Different tools have been used to compare a set of genes associated with primary melanoma and a second set of genes associated with metastatic melanoma. Each keyword or GO term found has been classified as either mitosis-related, uninformative, or 'not mitosis-related'. Compared with the other tools, Martini found more keywords, more-specific keywords, but also more uninformative keywords.] Ovarian cancer dataset As a final test of keyword enhancement, we used FatiGO, Marmite, Martini and ProfCom to compare a set of 160 genes associated with clear-cell ovarian cancer (i.e. cells that are clear when viewed through a microscope), and a second set of 105 genes associated with non-clear-cell ovarian cancer. For this comparison, each of the tools found exactly zero significantly enhanced keywords or GO terms. DISCUSSION Which tool best predicts function? In this work we have compared Martini with several other tools with similar functionality, and overall Martini performs favorably for the specific cases we tested. However, comparing such tools is complex and multifaceted. Many criteria need to be considered, making it difficult to judge which tool is the 'best'; for example, some end-users may prefer tools that offer advanced features and functionality, even though these tools may take longer to learn. Martini offers fewer advanced features than many other tools, since we designed it for end-users who require a simple, easy-to-use tool that gives quick insight into the functional differences between two gene lists. Another important criterion is the range of organisms that a tool can be applied to. Most of the tools we could find that are comparable to Martini (e.g. FatiGO, Marmite and ProfCom) can work only with genes from human plus a small number of model organisms. Martini has a much greater range, since it can be used with any organism in the Entrez gene database. Furthermore, Martini even allows comparison of genes across different organisms, whereas almost all similar tools usually restrict comparisons to genes within the same organism. Finally, accuracy is clearly a very important criterion for assessing tools such as Martini. Unfortunately, accuracy can be difficult to assess objectively and to quantify reliably. What is required is a set of reliable benchmarks tailored specifically for comparing two gene sets, ideally spanning a wide range of functions and organisms. In this article, we have taken a step in this direction by proposing one such benchmark (Table 3) that can be used with gene sets related to the human cell cycle (see 'Results' section). Since the human cell cycle is very well-studied, this benchmark probably represents a 'best-case', and the performance of such tools is likely to be worse for most other datasets (e.g. for the ovarian-cancer dataset, none of the tools tested could find any keywords or GO terms). We designed this benchmark to cover only the 20 most important phases, sub-processes and components in the cell cycle; however, as tools improve it would eventually be useful to create a more detailed, fine-grained benchmark. Currently, the best performing tools reach only 60% recall (Table 3), indicating that there is considerable scope for improving such tools. For the benchmark, we also included CoPub, even though this tool is not really similar to the others tested here, since it cannot compare two gene lists. Given this, CoPub performs rather well in the benchmark, with equal-best recall. The lower precision (i.e. more false positives) obtained by CoPub illustrates the benefit of the two-set approach. Using the cell-cycle benchmark, Martini had markedly better performance compared with the other tools we tested (Table 3). In addition, assessed qualitatively, Martini also had better or equal accuracy for each of the other datasets presented here (Arabidopsis, melanoma and ovarian cancer). Taken together, these results suggest that Martini represents an advance in the state-of-the-art in automated comparison of gene sets. Once published, we await further feedback from end-users applying Martini to a wider range of cases to see if this trend still holds. Keywords versus GO terms In our initial survey of tools for gene set analysis, we found that almost all such tools rely on GO terms, often exclusively (see 'Introduction' section), and only a small number of methods used keywords. This suggests a perception amongst many scientists in this field that GO terms are the preferred, more reliable source of functional annotation for genes. Indeed, when we shared the results presented here, many of our colleagues found it striking that the GO-based tools (ProfCom and FatiGO) performed in some cases much worse than a keyword-based tool such as Martini. Summarizing the performance of the tools with the datasets we tested, we conclude that Martini performed best, followed by FatiGO, then Marmite and ProfCom, both performing similarly. Since FatiGO and ProfCom are both GO-based, and since Martini and Marmite are both based on similar keyword dictionaries, it is likely that the poorer performance of Marmite and ProfCom arises from the statistical methods used. However, based on the performance difference between Martini and FatiGO, plus the relatively good performance of CoPub (Table 2), we conclude that keywords may be a richer source of functional annotations than GO terms. Since this runs contrary to the expectation of many scientists in the field, we decided to survey the density of gene annotations from GO terms versus keywords. As reported in the 'Introduction' section, in Entrez we found that the median numbers of GO terms and Medline citations per human gene are seven and nine, respectively. Using the AKS2 keyword dictionary, we get 32 keywords per abstract, and hence a median value of over 100 unique keywords per gene. For well-studied genes, the contrast between GO terms and keywords is even stronger, e.g. the Entrez entry for human p53 has 74 GO terms compared with 2527 Medline abstracts, which give rise to over 11 000 unique keywords using AKS2. The appeal of GO terms likely derives from the use of a controlled vocabulary, as well as the fact that annotation of gene function using GO terms has been done rather systematically. In contrast, extracting keywords automatically from Medline abstracts could be expected to be time-consuming, noisy and error-prone. However, both Martini and CoPub demonstrate the feasibility of a keyword-based approach.
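The advantage of comparing two gene lists directly, rather than one list against the whole-genome background, can be sketched with a generic keyword-enrichment test. Martini's exact statistic is not specified in the text above, so the Python sketch below uses a standard Fisher's exact test with Benjamini-Hochberg correction purely as a stand-in, and the abstract-to-keyword mapping it expects is a hypothetical input.

# Generic two-set keyword comparison (a stand-in, not Martini's published method).
from collections import Counter
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def compare_keyword_sets(abstract_keywords_a, abstract_keywords_b, alpha=0.05):
    # Each argument is a list of keyword sets, one per abstract linked to that gene list.
    n_a, n_b = len(abstract_keywords_a), len(abstract_keywords_b)
    count_a = Counter(k for kws in abstract_keywords_a for k in set(kws))
    count_b = Counter(k for kws in abstract_keywords_b for k in set(kws))
    keywords = sorted(set(count_a) | set(count_b))
    pvals = [fisher_exact([[count_a[k], n_a - count_a[k]],
                           [count_b[k], n_b - count_b[k]]])[1] for k in keywords]
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return sorted((p, k) for k, p, r in zip(keywords, p_adj, reject) if r)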
Furthermore, in agreement with Ku¨ffner et al. (9), we find that keywords appear to give consistently better, and more specific results than GO terms. Tips for using Martini In this section, we discuss some practical tips and issues for end-users planning to use Martini to compare gene sets. First, we would advise end-users not to have too high expectations when using any automated method to infer function. Like all such methods, Martini does not always produce good results. Martini depends entirely upon the underlying literature associated with the genes in the input sets: it may often occur that there is relatively little literature, or that the literature does not adequately describe the differences between the two genes sets. In the results, we presented one such case-the ovarian cancer dataset-where all of the tools tested produced exactly zero results. Secondly, our experience suggests that best results are often obtained by asking very specific questions, i.e. by comparing two closely related datasets. For example, in Table 2 we used CoPub to compared four sets of $150 genes, on average, with the background of the remaining $20 000+ human genes; this produced good recall but with many false positives, hence low precision. With Martini, we got better results by asking a more specific question, i.e. by comparing the $150 gene sets for each cell-cycle phase against the remaining $450 genes associated with the other phases (Table 2). In fact, as can be seen by comparing the Martini keywords in Table 2 with those in Figure 2, we got even better results (many more keywords and more specific keywords) with the same dataset, but asking an even more specific question, e.g. comparing on average $60 genes in each 10% sub-region of the cell cycle to the $540 genes in the remaining 90%. Thirdly, in cases where only a single gene set is available, one strategy is to construct a second reference gene set by randomly selecting a subset of genes from the same organism. We suggest using a reference set that is several times larger than the experimental set, and choosing genes that each have greater than the median number of abstracts for that species: for human, this means genes with more than nine abstracts. We used a similar strategy for the Arabidopsis dataset, and in this case it produced good results. However, we stress again that there can be no guarantee of producing informative results with automated methods such as Martini. Fourthly, an alternative strategy in the case of a single experimental set is to use a tool such as Anni (3) or TXTGate (2) that can interactively divide a single gene set into functionally related sub-clusters. These subclusters can then be compared using Martini. Finally, to analyze the keywords produced by Martini, we recommend the strategy adopted for the melanoma dataset (Table 4) i.e. divide the keywords into three groups: (i) Keywords that are obvious, given the biological context. (ii) Keywords that are uninformative (e.g. keywords such as 'surprisingly'): Martini sometimes produces many of these (e.g. Table 4), partly due to its large keyword dictionary. Such generic keywords are usually more annoying than troublesome, and we plan to blacklist many of them in the future. (iii) The remaining keywords are often the most interesting, and are most likely to give novel insight into the functional differences between the two gene sets. In some cases, Martini produces a list of keywords that is very large. 
To help such cases, in the future, we plan that the output list of keywords will be automatically organized into similar biomedical concepts. CONCLUSIONS Martini is designed to be fast and easy-to-use, providing a quick first insight into the functional difference between two gene sets. Our results suggest that Martini offers a significant advance in the automated extraction of biological knowledge from sets of genes or abstracts. Currently, Martini focuses on finding differences between two input sets; in the near future we plan to add an option to search instead for commonalities between these sets, for example to find interactions involving genes from both sets. In addition, we plan to improve Martini by adding document classification, by enabling the input of single gene-lists, and by using sequence alignment tools to extend functional annotation to similar sequences.
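Following the tips above, the reference-set strategy for a single experimental gene list (a random set several times larger, restricted to genes with more than the median number of abstracts) is straightforward to script. The sketch below assumes a hypothetical mapping from gene identifiers to Medline abstract counts; it illustrates the recommendation and is not part of the Martini tool.

import random
from statistics import median

def build_reference_set(experimental_genes, abstracts_per_gene, size_factor=5, seed=0):
    # abstracts_per_gene: hypothetical dict {gene_id: number of linked Medline abstracts}.
    med = median(abstracts_per_gene.values())        # e.g. nine abstracts per human gene
    pool = [g for g, n in abstracts_per_gene.items()
            if n > med and g not in set(experimental_genes)]
    random.Random(seed).shuffle(pool)
    return pool[: size_factor * len(experimental_genes)]  # several times the experimental set size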
Deep moiré potentials in twisted transition metal dichalcogenide bilayers In twisted bilayers of semiconducting transition metal dichalcogenides (TMDs), a combination of structural rippling and electronic coupling gives rise to periodic moiré potentials that can confine charged and neutral excitations [1][2][3][4][5][6][7]. Here, we report experimental measurements of the structure and spectroscopic properties of twisted bilayers of WSe 2 and MoSe 2 in the H-stacking configuration using scanning tunneling microscopy (STM). Our experiments reveal that the moiré potential in these bilayers at small angles is unexpectedly large, reaching values of above 300 meV for the valence band and 150 meV for the conduction band - an order of magnitude larger than theoretical estimates based on interlayer coupling alone. We further demonstrate that the moiré potential is a non-monotonic function of moiré wavelength, reaching a maximum at around a 13 nm moiré period. This non-monotonicity coincides with a drastic change in the structure of the moiré pattern from a continuous variation of stacking order at small moiré wavelengths to a one-dimensional soliton dominated structure at large moiré wavelengths. We show that the in-plane structure of the moiré pattern is captured well by a continuous mechanical relaxation model, and find that the moiré structure and internal strain rather than the interlayer coupling is the dominant factor in determining the moiré potential. Our results demonstrate the potential of using precision moiré structures to create deeply trapped carriers or excitations for quantum electronics and optoelectronics. Lattice vector mismatches between two layers of a van der Waals bilayer give rise to a moiré pattern. The moiré pattern affects the electronic structure of the bilayer, and many emergent quantum phenomena have recently been observed in these systems [4,5,8,9].
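For orientation, the moiré periods discussed below can be related to the twist angle with the standard rigid-lattice relation λ ≈ a/√(δ² + θ²), where θ is the deviation from exact alignment (from 60° for H-stacking) in radians and δ is the fractional lattice mismatch. This relation is not taken from the present work and neglects the lattice relaxation analyzed later; the lattice constants are the monolayer values quoted in the methods section. A minimal Python sketch:

import math

a_mose2, a_wse2 = 0.3288, 0.3282               # nm, monolayer lattice constants (methods section)
delta = abs(a_mose2 - a_wse2) / a_wse2          # fractional lattice mismatch, ~0.18%

def moire_wavelength_nm(twist_deg):
    # Rigid-lattice, small-angle estimate; twist_deg measured from exact (R or H) alignment.
    theta = math.radians(twist_deg)
    return a_wse2 / math.sqrt(delta**2 + theta**2)

for twist in (3.0, 1.7, 1.0):
    print(f"{twist:.1f} deg from alignment -> ~{moire_wavelength_nm(twist):.1f} nm moire period")
# ~6.3 nm at 3 deg, ~11 nm at 1.7 deg (e.g. a 61.7 deg H-stacked bilayer), ~18.7 nm at 1 deg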
In a TMD semiconductor heterobilayer, the low-energy electronic structure can be reasonably approximated by the properties of a single layer on which a spatially de-pendent potential energy landscape is imposed (termed the moiré potential) [2,10,11]. This moiré potential when periodic gives rise to subbands within the first valence or conduction bands, which are responsible for the emergent quantum properties observed. Spatially separated interlayer excitons can also be trapped within these subbands [1,4,6,7,9,12]. Theoretical estimates based only on interlayer coupling estimate the size of this moiré potential to be of the order of a 10 millielectronvolts (meV) at small moiré wavelengths (< 5 nm) [13,14], but experimental measurements of the moiré potential remains an important open problem in these materials. Scanning tunneling microscopy is one of the few experimental techniques that can provide direct measurements of the magnitude of the moiré potential, due to its high energy and spatial resolution. Its use requires clean surfaces and conducting samples, both of which are significant challenges for TMD semiconductor layers. A few pioneering STM experiments have been performed on CVD grown [15,16], rotationally aligned bilayers and (more recently) exfoliated, twisted TMD bilayers [17][18][19][20]. All of these previous measurements have been performed for moiré wavelengths near 5 nm at rotational angles close to zero degrees between the two layers. In this work, we study the heterobilayer of MoSe 2 on WSe 2 at a range of moiré wavelengths from 5->20 nm. We avoid problems associated with sample conduction by performing our STM measurements at room temperature with a few-layer graphite substrate, under which condition the samples are sufficiently conductive. Due to the broken inversion symmetry in TMDs, there are two distinct aligned stacking configurations termed R (zero degree alignment between the two layers, also termed AA stacking in the literature) and H (180-degree alignment between the two layers, also termed AB stacking). When a twist angle is present between the layers, the atomic registry between the two layers varies periodically in space [3]. For nearly R-stacked twisted bilayers, three of the high-symmetry stacking orders that are present in the sample are shown in figure 1a, termed MM', MX', and M'X respectively. Here M and X re- . e,f Stacking energy density (in meV /nm 2 ) from mechanical relaxation calculations corresponding to the STM topographs in c,d. The high symmetry stacking configurations illustrated in a,b are marked with appropriately colored dots in c-f. g Large area STM topograph (in pm) of H-stacked WSe2/MoSe2 at an average twist angle of ∼1.7 • . The topograph shows the presence of inhomogeneous heterostrain, one-dimensional solitons, point defects in the individual layers and edge dislocations of the moiré lattice. h Calculated stacking energy density (in meV /nm 2 ) of the relaxed structure. The stacking registry was forced at selected points from the experimental picture. The image is composed of several separate calculations surrounding the observed dislocations from both sides (details in supplementary information). fer to the metal and chalcogen atoms in the top layer, while M' and X' refer to those in the bottom layer; MM' refers to the stacking where the metal atoms of the top layer are in vertical registry with the metal atoms from the bottom layer. 
For nearly H-stacked bilayers, the corresponding high-symmetry stackings are XX', MX'and MM' as shown in figure 1b. Shown in figure 1c and 1d are STM topographs of samples at twist angles of ∼3 • (near R stacking) and ∼61.7 • (near H stacking) respectively. We can see that the two stacking orientations present very different structures as visualized in STM. In order to understand this difference, we calculate the relaxed structure of twisted bilayers based upon a continuous mechanical relaxation model (details in methods) [21][22][23]. The results of these calcu- Spectroscopic properties of moiré patterns of different wavelengths a-e STM topographic images (in pm) of moiré patterns of different wavelength (set points of -1.7 V and 100 pA). As the wavelength is increased, the area occupied by the MM' stacking configuration (brighter area) reduces, leading from a transition from a triangular lattice at small wavelength to a strain-soliton structure at large wavelength. f-j dI/dV measurements obtained at the high symmetry stacking configurations for each of a-e. It is clearly seen that the MM' stacking configuration displays the smallest band gap, with both the conduction and valence edges shifted towards the Fermi level relative to the M-X' stacking configuration. k-o dI/dV measurements obtained over a larger energy range. We can see the band edge of the conduction band clearly, and at high negative energies the states from the MoSe2 valence bands that dominate the tunneling. The valence bands nearest the Fermi level that are seen clearly in f-j are much smaller in conductance on this scale, and are not seen clearly. dot, figure 1d). We also see quite clearly for this moiré wavelength (∼11 nm) that lattice reconstructions result in fairly sharp triangular domain boundaries between the MX' and MM' regions. All of these together allow us to clearly identify the various stacking configurations in our STM topograph in figure 1d, as indicated in the figure. We see that samples that are near H stacking present a completely different structure than samples near R stacking, which has not been studied by STM previously. For the rest of this work, we focus exclusively on this case. Having understood the details of the moiré pattern at small length scales, we proceed to perform STM measurements over large areas of nearly H-stacked samples. One such topograph is shown in figure 1g, over an area of 500x500 nm 2 . This topograph shows many interesting features, including a spatially varying moiré period, large (> 100 nm) regions of uniform MX' stacking, and onedimensional solitons. Interesting electronic and optical properties have been reported in these 1D solitons [24][25][26]. Point defects in the individual layers are seen as white dots of atomic dimensions, and edge dislocations in the moiré lattice are also observed, presumably due to impurities between the two layers of the bilayer. These features arise from the presence of impurities and nonuniform strain over this area. All of this disparate behavior can be captured with the continuum mechanical relaxation calculation shown in figure 1h. The only inputs (beyond those of the periodic case shown in figure 1f) that go into this calculation are the locations and stacking registry of selected XX' stacking points. Given this information, a detailed spatial account of the relaxed structure is resolved. 
Due to the existing dislocations, the process was repeated surrounding dislocations from different sides to generate the integrated map of figure 1h (see methods and supplementary information). We find that this process captures the entire complex structure of the moiré pattern, and can be used to quantitatively estimate the local strain fields producing inhomogeneities in the large scale structure in figure 1g. We now proceed to examine the structure of the moiré pattern at various length scales. Shown in figure ure 2b-d), the area of the MM' stacked region decreases at the expense of the area of the MX' stacking region. For moiré wavelengths that are larger than 20 nm, the MM' region shrinks to a shear soliton of width approximately ∼ 4 nm. Above this wavelength, moiré patterns resemble honeycomb lattices formed by the shear solitons rather than the triangular lattices seen at small wavelengths (see supplementary information for a typical image). The large moiré wavelengths are extremely susceptible to small amounts of strain, which can distort the honeycomb structure severely. An example is shown in figure 2e, where the individual honeycomb cells have been distorted to form quasi-rectangular strips of MX' stacking that are separated by soliton domain walls. We now consider the spectroscopic properties of samples exhibiting different moiré wavelengths. Shown in figure 2f-j are measurements of the differential conductance (dI/dV ) obtained at the three high symmetry locations of the moiré lattice, viz. XX', MM' and MX' for the moiré patterns shown in figure 2a-e respectively. Clear and systematic differences are seen in the spectroscopic properties of the different sites within the moiré unit cell. It is seen that the edge of both the valence band and the conduction band are closest to the Fermi level for the MM' site for all of the moiré wavelengths. The difference in valence band edges between the MM' and MX' regions reaches a maximum at a moiré wavelength around 13 nm(figure 2c), and decreases for both smaller and larger moiré wavelengths. Similar behavior is observed for the conduction band edges. The wavelength at which the moiré potential is largest corresponds structurally to the length scale where the MM' region transitions from a triangular region to a soliton. The valence band edge observed in figure 2f-j is derived from the states with primarily WSe 2 character, while the conduction band edge states are derived from states with primarily MoSe 2 character [3]. The states with WSe 2 character have a small tunnel matrix element due to the larger physical distance from the STM tip. The conduction band states, therefore, have a much higher intensity than the valence band states shown in figure 2f-j. Spectra taken over a wider bias range, shown in figures 2k-o show clearly the conduction band edges as well as deeper valence band states that are derived from the MoSe 2 layer. We use these spectra to define the edges of the conduction and valence bands (see supplementary information for more details of the procedure). The spectroscopic differences between the various regions of the moiré unit cell described above are easiest to understand by considering a single moiré unit cell of a single layer. Within this unit cell, a moiré potential energy exists that shifts the location of the band edge. Thus, the moiré unit cell can simply be considered to be a problem of a triangular quantum well with finite depth. 
Within a single unit cell, this gives rise to a number of confined quantum dot states [16,22]. At low energy, the states are localized inside the well while at energies above the well depth, the states are found outside the well. At room temperature, we average over closely spaced states and instead see a band edge both inside and outside the quantum well. The difference in the band edge positions inside and outside the well is then simply equal to the well depth, ie, the magnitude of the moiré potential. Our spectroscopic results indicate that the MM' region of the moiré unit cell is the region with a potential minimum for both the valence and conduction band, ie, the low energy physics of this system is dominated by electrons or holes trapped within these regions. A cursory inspection of figure 2f-j also reveals that this trapping potential is large -around 300 meV at its largest in the valence band, and 100 meV for the conduction band. This consideration becomes especially interesting for large moiré wavelengths, where the MM' regions shrink to soliton lines. Our results indicate that carriers are confined to these one-dimensional lines at low energies in these structures. This quantum well picture that is based on a single moiré unit cell is only slightly modified by the periodic boundary conditions imposed by the moiré pattern -the coupling between neighboring wells broadens each of the eigenstates within the well to a "flat band" with width determined by inter-well coupling [10]. In order to understand the systematic evolution of the moiré potential as a function of moiré wavelength, we can utilize strain-induced inhomogeneity in the sample to our advantage. Shown in figure 3a is a region of the sample where the moiré wavelength interpolates continuously between a minimum of 5 nm to (> 20 nm). We proceed to take dI/dV spectroscopic measurements at ev- Having obtained separately the conduction and valence band edges for the MM' and MX' sites in figure 3, we can utilize this information to extract the moiré potential as a function of wavelength, which is simply the energy difference in band positions between a MM' region and its MX' neighbors(see supplementary information for details of the analysis). This is plotted in figure 4a for both the conduction and valence band. We can see clearly that the moiré potential is large and non-monotonic for both the conduction and valence band edges. In previous theoretical considerations, the hybridization between the two layers of the heterostructure has been considered to play a dominant role in the moiré potential [27]. For WSe 2 /MoSe 2 , the hybridization differences between bilayers with uniform MM' and MX' stacking order are small (of order 10 meV), and thus there has been a theoretical expectation that the moiré potential is also similarly small in magnitude. Our results indicate that the true moiré potential is far larger than estimations based on uniform stacking order. Some insight into this difference can be gained from previous STM experiments on moiré patterns in TMD bilayers [15,16,18,28]. In all of these works, the observed moiré potential is significantly larger than the expectation based on stacking order alone. This is the case even though these vari-ous experiments are on different materials from ours and also are in the R stacking configuration. 
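A rough order-of-magnitude estimate supports this picture of many closely spaced, thermally averaged states in each well. Assuming an in-plane effective mass of about 0.5 m_e (a typical TMD value, not a number quoted in this work) and well sizes of order the moiré period, the single-particle level-spacing scale is only a few meV, far below both the measured well depths of 100-300 meV and kT ≈ 26 meV at room temperature:

import math

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def confinement_scale_meV(L_nm, m_eff=0.5):
    # Characteristic level spacing hbar^2 pi^2 / (2 m* L^2) for a well of lateral size L.
    L = L_nm * 1e-9
    return (HBAR * math.pi) ** 2 / (2 * m_eff * M_E * L ** 2) / EV * 1e3

for L in (5, 10, 13, 20):
    print(f"L = {L:2d} nm -> ~{confinement_scale_meV(L):.1f} meV")
# ~30 meV at 5 nm, ~7.5 meV at 10 nm, ~4.5 meV at 13 nm, ~1.9 meV at 20 nm: a
# 100-300 meV deep well therefore holds many levels, and at room temperature
# (kT ~ 26 meV) neighboring levels are smeared into an effective band edge.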
The common feature of all of the works (including ours) is that real moiré patterns feature structural distortions in both the lateral and vertical dimensions -these distortions must, therefore, be dominating the moiré potential. Distortions within the moiré unit cell give rise to significant lateral and vertical strain. Due to the large unit cell sizes, we consider in our work, accurate ab-initio calculations of the electronic properties of the moiré are not feasible. We expect significant vertical strain to be present in the structure. For uniformly stacked bilayers, the MM' stacking configuration displays a significantly larger c-axis lattice constant in comparison to MX'(6.7Å instead of 6.2Å). We thus expect that within the moiré structure, the MM' regions are being compressed down by the MX' regions, while the MX' regions are under tensile strain along the c axis. These strains can give rise to significant changes in electronic structure. Shown in figure 4b is a DFT calculation of mechanical relaxation model of a uniform MM' stacked region at an interlayer distance of 6.7 A (equilibrium) and 6.5 A (compressed). This compression gives rise to about a 0.2 eV shift in the valence band position at the Γ point in a direction that is consistent with the experimental finding. We also expect significant lateral strains to be present within the unit cell, which we estimate using our mechanical relaxation model within the moiré unit cell. We write the strain tensor as = C I + S (cos (2φ)σ Z + sin (2φ)σ x ) where C is an isotropic compression and S is a volume-preserving shear, and I is the identity matrix and σ Z , σ x are the Pauli matrices in standard notation. The magnitude S is plotted within the moiré unit cell in figure 4c for a moiré wavelength of ∼ 10 nm ( C is small at this wavelength). The total variation in strain across the unit cell is seen to be > 3% as can be seen in the line cut of strain tensor elements shown in figure 4d. For comparison, a uniform tensile strain of a percent changes the band gap of TMDs by ≈ 0.2 eV [24]. It is thus no surprise that these large values of strain in the moiré unit cell dominate the electronic properties. Fig. 4e shows the maximal value of s as a function of the moiré wavelength from mechanical relaxation calculations. It is interesting to note that the shear strain also has a non-monotonic behavior as a function of moiré wavelength. This similarity in behavior to the experimentally observed moiré potential further supports the hypothesis that relaxation induced strain is the source of the observed enhanced moiré potential. Our results show that the moiré potential in TMD heterobilayers is substantially larger than previous expectations, and can reach values of several hundred millivolts. Such large trapping potentials can be extremely useful in confining charge carriers as well as excitons and enhancing interactions between them. At the same time, our results show that the largest moiré potentials are realized for a narrow range of angles, and engineering high-quality structures with uniform moiré lattices with these wavelengths remains an open problem. Device fabrication Monolayers of WSe2 and MoSe2 were obtained by mechanical exfoliation from self-flux grown bulk crystals [29]. The relative orientation between two TMD monolayers was determined by second harmonic generation(SHG). 
We used polypropylene carbonate (PPC) to pick up an h-BN flake, few-layer graphite, and WSe 2 and MoSe 2 monolayers, respectively, using a high-precision rotation stage. In the final stage, the sample was flipped on a Si/SiO2 substrate at elevated temperature 120 • C. STM Measurements STM and STS data were acquired at room temperature in ultra high vacuum conditions. A lock-in amplifier with modulation of 25 meV and 917 Hz was used for dI/dV spectroscopy measurements. Second-Harmonic-Generation Measurement SHG measurements were used to determine the crystal orientations of WSe2 and MoSe2 monolayers. Linearly polarized femtosecond laser light (Spectrum Physics Tsunami, 80MHz, 800nm, 80 fs) was focused onto a monolayer with a 100 objective (Olympus LM-PLFLN100X). The reflected SHG signal at 400nm was collected using the same objective and detected by a photomultiplier tube (Hamamatsu R4220P) and recorded with a photon counter (BK PRECISION 1823A 2.4GHz Universal Frequency Counter). CVD grown triangular shape monolayer MoS2 (2D Layer) were used to calibrate the SHG setup DFT Calculation We use a slab structure to model the WSe2/MoSe2 heterostructure. To avoid artificial interactions between the polar slabs, we place two oppositely oriented WSe2/MoSe2 units with the mirror symmetry in the slab. Each slab is separated from its periodic images by 15 vacuum regions. Our DFT calculations were performed using the Vienna ab initio Simulation Package [30]. We use the projector augmented wave method to construct pseudopotentials [31]. The plane-wave energy cutoff is 400 eV. The exchange correlation functional is approximated by the generalized gradient approximation as parametrized by Perdew, Burke, and Ernzerhof [32]. The Brillouin zone is sampled by a 30 x 30 x 1 k-mesh. Van der Waals dispersion forces between the two constituents were taken into account using the optB88-vdW functional within the vdW-DF method developed by Klime et al [33]. Atomic Relaxation Simulation Modeling of the atomic relaxation of twisted MoSe 2 /WSe 2 was performed within a continuity model following the method presented in [34], but solved in real space. In this model, the total energy of the system is taken as the sum of elastic energy and a stacking energy term. The total energy was minimized in search for the inter-layer real space displacement field corresponding to the relaxed structure. The stacking configuration at selected XX'-stacking points were imposed as boundary condition as to account for a specific case under study. In the periodic case (Fig. 1e-f and Fig. 4c-d) four such boundary conditions were used to impose a fixed external strain condition. In the case of a non-uniform strain map as in Fig. 1f, such points were needed wherever the moiré superlattice deviated from a uniformly periodic structure. Special care was needed to describe the 11 dislocations in the image. For each dislocation the structure was relaxed using two registry maps for the XX'-stacking cites, as to describe both sides of the dislocation. The stacking energy maps were later stitched to for Fig. 1h. More details about the real space atomic relaxation simulations specific to fig. 1h are presented in the supplementary information. The mechanical relaxation parameters for the MoSe 2 /WSe 2 heterostructures were calculated using DFT as implemented in the Vienna ab initio simulation package (VASP) version 5.4.4 [30]. All geometries included a vertical (c-axis) of 25Å to ensure no interaction between periodic images. 
The co-linear spin-polarized electronic structure was calculated with a plane-wave cutoff of 500 eV, the VASP PAW PBE potentials (v54) [31], a broadening of 50 meV, and a self-consistency convergence criterion of 10 −6 eV. A periodic dipole correction in the c-axis to the total energy was included, and the van der Waals functional DFT-D3 (V3.0) was used [35]. The bulk modulus (K) and shear modulus (G) for each material were calculated by applying isotropic or uniaxial strain to a monolayer lattice, ranging from −1.5% to 1.5% in units of 0.3%, and then performing a quadratic fit to the strain-dependent energies. The generalized stacking fault energy function (GSFE) coefficients are extracted from a 6 × 6 sampling of the configuration between layers, with the vertical positions of the atoms relaxed at each configuration until all forces are less than 20 meV. The Fourier components of the resulting energies are then extracted to create a con-venient functional form for the GSFE used to describe the stacking energy term in the atomic relaxation calculations. The resulting GSFE coefficients and elastic coefficients used for the mechanical relaxation calculations were (following nomenclature of [34] and units of meV u.c. ): MoSe 2 : K=40521, G=26464 WSe 2 : K=43113, G=30770 c 0 =42.6, c 1 =16.0, c 2 =-2.7, c 3 =-1.1, c 4 =3.7, c 5 =0.6 The unit-cell spacing are α=0.3288 nm for MoSe 2 and α=0.3282 nm for WSe 2 . In all the mechanical relaxation calculations we assumed for simplicity one layer (WSe 2 ) to be rigid, and allow all the relaxation to happen at the other (MoSe 2 ) layer. Relaxing this condition would not affect the overall picture significantly. The periodic mechanical calculations of Fig. 1e, Fig. 1f and Fig. 4c-d assumed twist angle of 4 • , 1.5 • , and 1.88 • and external strains of 0.13%, 0.3%, and 0% respectively, all with a Poisson ratio of 0.23. Figure 1h presents a mechanical relaxation calculation aiming to reproduce different non-uniform related strain features that were measured in figure 1g. Beyond the intrinsic system properties, namely the generalized stacking fault energy function (GSFE) lattice constants and elastic properties for this system (see methods section), the simulation uses as initial and boundary conditions the locations and stacking registry of selected XX' stacking points. Here we provide further details about this process. Figure S1 presents the initial and boundary conditions used to construct figure 1h. The white circles mark locations of dislocations appearing in the measurement of figure 1g. The dislocations are lattice defects, and are not directly supported by the model. However, locally the simulation can still account for the strain maps and stacking configurations. Therefore, the calculation was divided into several separate calculations, surrounding the dislocations from different orientations (region marked by dashed lines with different colors). In each region, the colored dots mark the positions of points where XX stacking configurations were forced for a given simulation. The false-color shows the stacking energy density of the initial configuration, used as a starting point for the simulations, which was generated as an interpolation between the forced stacking configurations. The solution of figure 1h is a result of stitching these 5 calculations. S3. 
BAND EDGE ANALYSIS In two dimensions, the density of states ρ_S(E) of an ideal semiconductor with a single valence and conduction band at T=0 features sharp band edges with a constant value of the DOS beyond the band edge. Our experiments are carried out at room temperature, which results in a broadening of the edges due to the Fermi distribution of electrons in the tip and sample at non-zero temperature. The expression for the differential conductance of the tip-sample junction is given by dI/dV(V) ∝ ∫ ρ_S(E) [−∂f(E − eV)/∂E] dE, where f(E) is the Fermi-Dirac distribution at the temperature of the measurement. [FIG. S3: a Theoretical calculation of the temperature-broadened differential conductance of a semiconductor with valence band edge at -0.5 V and conduction band edge at +1.0 V. The dashed lines are linear fits to the spectrum above and below the band edges, and the crossing points mark the position of the band edges. b,c Determination of the valence and conduction band edges from typical experimental spectra following the procedure outlined in a.] The result of this process is shown in figure S3a for a hypothetical semiconductor with a valence band edge at -0.5 eV and conduction band edge at 1.0 eV. The sharp band edge develops a finite slope due to temperature broadening at non-zero temperature. A simple practical method that we adopt to define the band edges is also shown on this plot, by drawing intersecting straight lines below and above the gap edge. The use of this method is shown in practice on real data in figures S3b and S3c for the valence and conduction bands respectively. We note that the accuracy of this process is not limited by the temperature broadening of the spectrum - the determination of the band edge is ultimately set by the signal-to-noise ratio of the measurement. S4. ANALYSIS LEADING TO FIGURES 3D AND 4A OF THE MAIN TEXT In order to extract band edges as a function of moiré wavelength, we start by defining the moiré unit cells, based upon the positions of the XX' positions from the topograph in figure 3a of the main text (see figure S4a). We avoid moiré unit cells that have aspect ratios larger than 2. We then use Delaunay triangulation to define the moiré unit cells from the XX' positions shown in figure S4a. For each Delaunay triangle, we then find the centroid, which defines the position of MM' and MX' stacking respectively. We use the spectroscopic data of figure 3b and 3c to find the band edges at these points, and set this to the valence and conduction band edge values for each given triangle. The result of this procedure for the valence band edge is shown in figure S4b. We then take the area of the moiré unit cell and convert it to a moiré wavelength by assuming it to be an equilateral triangle in order to generate the plot in figure 3d of the main text. This necessarily introduces scatter into the data, since triangles with different aspect ratios have different spectroscopic properties. However, it allows us to represent the entire data set of band edges as a function of moiré wavelength. In order to then find the moiré potential for each moiré unit cell, we take the difference in band edges between each MX' region and the three neighboring MM' regions, as shown in figure S4c. This data is used to generate the plot in figure 4a of the main text. [FIG. S4: b Band edge values (in eV) for the triangles defined by the XX' points in a. c Determination of the moiré potential. The moiré potential is defined as the absolute value of the difference between a given triangle in b (shown in red) and the average of its three nearest neighbors shown in blue.]
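The band-edge determination of section S3 can be mimicked numerically. The short sketch below is an illustration only: it uses an idealized spectrum with a linear onset above a conduction-band edge placed at the hypothetical +1.0 eV of figure S3a, broadens it with the thermal kernel −∂f/∂E at room temperature, and then recovers the edge from the crossing of straight-line fits made inside the gap and above the edge.

import numpy as np

kT, dE = 0.0259, 0.001                         # eV at ~300 K; energy step (eV)
E = np.arange(-0.5, 2.0, dE)                   # bias/energy grid (eV)
spectrum = np.clip(E - 1.0, 0.0, None)         # idealized spectrum: zero in the gap, linear onset at +1.0 eV

x = np.arange(-0.3, 0.3, dE)                   # thermal broadening kernel -df/dE (area ~1 after * dE)
kernel = dE / (4 * kT * np.cosh(x / (2 * kT)) ** 2)
didv = np.convolve(spectrum, kernel, mode="same")   # model temperature-broadened dI/dV

gap = (E > 0.3) & (E < 0.8)                    # straight-line fit inside the gap ...
rise = (E > 1.15) & (E < 1.6)                  # ... and on the spectrum above the edge
p_gap, p_rise = np.polyfit(E[gap], didv[gap], 1), np.polyfit(E[rise], didv[rise], 1)
edge = (p_gap[1] - p_rise[1]) / (p_rise[0] - p_gap[0])   # crossing point of the two fitted lines
print(f"recovered band edge ~ {edge:.3f} eV")  # close to the nominal 1.0 eV despite the broadening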
Frameless radiosurgical third ventriculostomy: Technical report Background: We describe the technique and results of the first image-guided, linear accelerator-based, frameless radiosurgical third ventriculostomy. Methods: We report a 20-year-old man with diplopia, balance disturbances, and limitation of gaze supraversion. Magnetic resonance imaging of the brain and cranial computed tomography showed a left thalamic-midbrain lesion that caused partial compression of the Silvio aqueduct and mild ventricular dilatation. The biopsy revealed the diagnosis of pleomorphic xanthoastrocytoma. Before radical treatment of the tumor with fractionated stereotactic radiotherapy, the patient underwent frameless radiosurgical third ventriculostomy on the TrueBeam STX® platform with the ExacTrac localization system. The target was defined on the floor of the third ventricle, at the midpoint between the mammillary bodies and the infundibular recess. The prescription dose was 120 Gy, given using a monoisocentric technique of multiple noncoplanar circular arches. The geometric arrangement of the plan consisted of 15 arches, with a 4 mm cone, distributed over 110° of table rotation. Results: There was symptomatic and imaging improvement two days after radiosurgery. On CT, a reduction in ventricular dilation was observed, with a reduction in the Evans index from 0.39 (initial CT) to 0.29 (CT at 15 days). On 3.0 T magnetic resonance imaging at 3 months, the third ventriculostomy was demonstrated. There have been no treatment failures or complications. Conclusion: It is possible to perform frameless radiosurgical third ventriculostomy effectively and without associated morbidity in the short term. INTRODUCTION The precise localization of the target, the high spatial accuracy of the radiation beam, and the abrupt dose fall-off outside the volume of interest are essential characteristics of radiosurgery (RS). To comply with these requirements, the use of a rigidly placed stereotactic guide device (stereotactic frame), other immobilization technology (mask), and/or an image-guided localization system is essential. [3] Since the 2000s, treatment localization systems (TLSs) for image-guided RS (IGRS) with mask-based immobilization (frameless) have been playing a very important role, as they achieve a degree of mechanical precision comparable to that of frame-based stereotactic guidance, while allowing noninvasive immobilization of the patient. The mechanical precision of these systems has been extensively studied. [4][5][6][7][8] Radiosurgery finds its main indications in benign, malignant, and functional intracranial pathology, since it has the capacity to eradicate or inactivate a tumor, to favor the obliteration of the anomalous vasculature in a malformative nidus, to produce a precise lesion that allows interrupting neuronal pathways, or to generate a breach of continuity (an opening) at a specific target; this last characteristic has allowed its indications to be transcended and its applications extended. [9][10][11][13][14][15] In 2012, we reported the first radiosurgical third ventriculostomy (SRS-irdV) with frame fixation and stereotactic guidance, based on the LINAC Novalis (BrainLAB, Heimstetten, Germany), as a procedure analogous to endoscopic third ventriculostomy (ETV), demonstrating that it could be an effective and safe therapeutic alternative in well-selected patients with mild obstructive hydrocephalus.
[16] On this occasion, we report the procedure and results of the first image-guided (frameless) radiosurgical third ventriculostomy (IGRS-irdV), with LINAC TrueBEAM STx platform (Varian Medical Systems, Palo Alto, California, United States), and performed at the Institute National Neurology and Neurosurgery "Manuel Velasco Suarez" from Mexico City. Description of the case We present a 20-year-old man with a 1-month history of diplopia, balance disturbances, spatial disorientation, and limitation for conjugate gaze supraversion. Magnetic resonance imaging (MRI) of the brain and cranial computed tomography (CT) showed a left thalamic midbrain injury that caused partial compression of the Silvio aqueduct and mild ventricular dilation, with an Evans index (EI) of 0.39. e biopsy was obtained by neuronavigation and the histopathological report revealed the diagnosis of pleomorphic xanthoastrocytoma (WHO Grade II). Due to the aforementioned clinical and imaging characteristics and the patient's functional status, 70% on the Karnofsky scale (KPS), we proposed performing a ETV, before radical treatment of the tumor with fractionated SRS (FSRT). e patient declined the endoscopic procedure, so the radiosurgical procedure (IGRS-irdV) was chosen. Pretreatment image, simulation tomography, and image fusion Pretreatment brain MRI was performed with a 3-tesla (3.0 T) magnet resonator (General Electric [GE] Signa Twin Excite MRI Scanner, GE Medical Systems, Milwaukee, WI), with weighted volumetric sequence acquisition (SPGR) in simple T1, T2, and T1 with contrast, cuts with a thickness of 1 mm, matrix size of 512 × 512, pixel size of 0.45 mm, and without gap. For noninvasive immobilization of the skull, a threecomponent thermoplastic mask for RS was formed (BrainLAB, Heimstetten, Germany), after placing the neck support, selected based on the patient's anatomy. Subsequently, the method of immobilization was fixed to the mask base located on the universal table for tomography table from the same manufacturer. After the fixation system was placed, the images of the skull simulation CT (Siemens SOMATON Sensation 64 CT, Siemens Medical Solutions, Malvern, PA) were acquired with cuts of 0.7 mm thickness, reconstruction size of 512 × 512 matrix, and no spaces. Coregistration of pretreatment MRI with simulation CT was performed using Brainlab Elements Image Fusion (version 3.0.1.6), which uses rigid registration methods to match anatomical structures present in different image sets, to obtain precise anatomical and geometric information for the proper definition of the target. Definition of target volume and organs at risk e definition of the target and the contouring of risk organs were performed with the BrainLAB iPlan RT Image delineation software (version 4.1.2.5). e target, as described in our previous article, was located on the brain MRI, in the SPGR sequence, weighted in T1 with contrast, in the midsagittal plane, and was defined on the floor of the third ventricle, at the point medium between the mammillary bodies and the infundibular recess [ Figure 1]. e organs at risk (OAR) considered and the limiting doses used are shown in [ Table 1]. [19][20][21][22][23] Prescription dose, dose distribution, and treatment planning e prescription dose to the isocenter was 120 Gy, given using a monoisocentric technique of multiple noncoplanar circular arches. 
e BrainLAB iPlan RT Dose system (Version 4.5.5) was used for the planning of treatment with cone collimation, which uses the Clarkson dose algorithm for the calculation and projection of the 3D distribution of the isodose curves. e geometric arrangement of the plan consisted of 15 arches, with the 4 mm cone, distributed at 110° of table, with table angles between 60° and 310°, each arc separated by 5° of table rotation and one gantry rotation 90°. e graphic representation of the arches arrangement is shown in [ Figure 2], and [ Table 2] shows the technical aspects of the IGRT-irdV according to the International Electrotechnical Commission. [12,17] e dose distribution was elongated on the Z-axis (craniocaudal), therefore, the 115 Gy curve crossed the entire floor of the third ventricle on the same axis (approximately 2 mm in length). e 85 Gy curve, even more elongated in the craniocaudal direction, and acquiring an "ellipsoidal" shape, managed to cover, above and below, the floor of the third ventricle, with the equivalent of its same thickness, with the aim of guarantee the coverage of spatial uncertainties associated with dosimetric parameters and natural movements during the respiratory cycle. A spatial "hourglass" distribution was generated in the 24 and 12 Gy isodose curves [ Figure 3], with the aim of limiting doses to OAR. Treatment planning was carried out the same day the images were acquired, and 1 day before performing the IGRS-irdV procedure. e total planning time was approximately 180-240 min. Dose-volume histograms (DVHs) of the radiosurgical third ventriculostomy e evaluation of DVHs was carried out following the recommendations of report 91 of the International Commission on Radiation and Measurement Units (ICRU) for the prescription, recording and reporting of stereotactic [20] Chiasm, optic nerves, optic tract Dmáx <8-10 Gy [21] Brainstem Dmáx <12.5 Gy [12] Mammillary bodies* V12 <5-10 cc [22] Basilar artery* Dmáx <30 Gy [23] *ere are no defined restrictions for these structures, so we use the normal brain tissue restriction for mammillary bodies and one of the proposed restrictions for the cavernous portion of the internal carotid artery Table 2: Technical aspects of the linear accelerator based, frameless radiosurgical third ventriculostomy, International Electrotechnical Commission. [12,17] Arc number Stationary treatments with small photon beams. [17] e maximum point doses (Dmax or D0%) and the doses close to the minimum volume (D35mm 3 or D2%) of the OARs considered are shown in [ Table 3]. All the OARs met the proposed restriction doses, and only 1.46 cm 3 of healthy tissue received 12 Gy [ Figure 4]. Patient placement, treatment, and quality control e patient positioned himself on the treatment table, placed the neck support, and was immobilized with the mask. Subsequently, the BrainLAB positioning arrangement for RS frameless was placed, which provides precise positioning information to the TLS. e TLS used for image guidance was the ExacTrac6.2 (ET) system (BrainLAB AG), which integrates an optical positioning system based on infrared (IR), a radiographic positioning system consisting of the acquisition of two oblique images kV X-ray stereoscopic (X-ray 6D) and a robotic stretcher that can move in six directions. 
e verification of the initial position and for each table movement was performed with the coregistration of the images acquired by the ET with the digitally reconstructed radiographs (DRRs) of the simulation CT, using 6° of-freedom fusion algorithms (6DOF). [18] e information of the linear and rotational displacements at the beginning and during the treatment for each Dosimetric studies showed a variation of the prescribed dose of less than 2%. One day after the radiosurgical procedure, the left diencephalic lesion (pleomorphic xanthoastrocytoma) with a planning target volume of 12.3 ml, was treated with FSRT, using a volumetric intensity modulated arcotherapy technique, which consisted of two coplanar arches, with gantry rotation between 35° and 179°, collimator between 30° and 330°, and table at 0°. e prescription dose was 50.4 Gy, given in 28 fractions. e radiosurgical procedure was considered for the evaluation of the DVH of the fractional treatment, obtaining that in the sum plan, the final cumulative doses were below the tolerance doses for each OAR. Postprocedure course Clinical evaluation was performed every day during the first week, each week during the first month, then every 15 days for the 2 nd month, and then 1 day after acquisition of neuroimaging studies. e follow-up by image consisted of acquiring a cranial CT at 2, 7, 15, and 30 days posttreatment, Figure 4: Dose-volume histograms of organs at risk considered in the treatment plan. Here, we can see the OARs receive lesser than the tolerance; including mammillary bodies that receive lesser than 12 Gy. After the radiosurgical procedure, prednisone was started at a rate of 1 mg/kg/day with a dose reduction scheme (10 mg/ week). In the first clinical evaluation, the patient showed improvement in diplopia and balance disturbances, so the KPS scale score was increased (80%), and in the evaluation at 3 months, he was already free of symptoms ( 100% KPS). On skull CT, 2 days after treatment, we found a slight reduction in ventricular dilation with EI of 0.35; at 7 days, we found an even greater reduction in ventricular dilution with an EI of 0.32, and at 15 days, we showed the resolution of hydrocephalus, EI of 0.29, which has been maintained throughout the patient's follow-up [ Figure 5]. e 3.0 T MRI, 3 months after the procedure, which included the acquisition of the weighted images in T1, T2, and T1 with contrast, in the SPGR sequence, confirmed the presence of the third ventricle stoma (2.21 mm continuity solution on the floor of the third ventricle) [ Figure 6]. Regarding the evolution of pleomorphic xanthoastrocytoma, on MRI of the brain 3 months after treatment, we showed a partial response characterized by a reduction >50% of the lesion in its perpendicular diameters, with persistence only of the cystic portion of the disease, with a volume of 2.34 cm 3 , which has remained stable for 2 years of follow-up [ Figure 7]. During the entire follow-up, the patient did not present clinical data of neurological progression or alterations secondary to RS. e patient was irradiated 2 years ago. Nowadays, he is a laws student in the Universidad Nacional Autonoma de Mexico (UNAM). He is an excellent student, with high performance, his recent score is 98 (98/100). 
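The ventricular response reported above is tracked with the Evans index, the ratio of the maximal width of the frontal horns of the lateral ventricles to the maximal internal diameter of the skull on the same axial image. A trivial sketch follows; the millimetre values are illustrative only, chosen to reproduce the reported indices, and are not measurements from this case.

def evans_index(frontal_horn_width_mm, internal_skull_diameter_mm):
    # Evans index = maximal frontal horn width / maximal internal skull diameter (same axial slice).
    return frontal_horn_width_mm / internal_skull_diameter_mm

# Illustrative numbers: with a 130 mm internal skull diameter, frontal horns narrowing
# from ~50.7 mm to ~37.7 mm reproduce the reported course from 0.39 to 0.29.
for label, horns_mm in [("pre-treatment", 50.7), ("day 15", 37.7)]:
    print(label, round(evans_index(horns_mm, 130.0), 2))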
Endoscopic third ventriculostomy in obstructive hydrocephalus secondary to midline tumor pathology Endoscopic third ventriculostomy combined with tumor biopsy, or resection, when feasible, is the recommended primary treatment in patients with tumors of the posterior region of the third ventricle and of posterior fossa in the midline that condition obstructive hydrocephalus. [24,25] A single-or dual-port approach has proven to be safe and effective. [26] Immediate subjective symptomatic relief has been described ranging from 83% to 89% for primary ETVs. [27][28][29] e success rate of ETV in these pathologies ranges from 56 to 81%. [30][31][32] Reduction of ventricular size, evidenced by postprocedure imaging studies, is Table 4: Positioning report obtained from the ExacTrac. associated with satisfactory clinical results. [26] e failure rate within the 1 st year is between 16 and 20%, the majority occurring in the first 3 months (58-97%), and it must be taken into account that failures can develop after several years of procedure. [30,31,33,34] e etiology of the failures is multifactorial and among its causes it has been described that the size of the stoma and the insufficient flow of cerebrospinal fluid (CSF) through it can determine an early failure. e ideal stoma size has not yet been defined. [35] Global complications of ETV have been reported between 0 and 2%. [36] Radiosurgical third ventriculostomy with frame and stereotactic guide, and radiosurgical third ventriculostomy with image guided (frameless) Displacement (mm) Angle (°) (X) Lateral In general terms, the main technical differences between the SRS and the IGRS are the immobilization method (frame or mask) and the procedure guide (stereotactic or image guide). Placing an invasive stereotactic framework through mechanical fixation is safe and offers little morbidity, however, it does result in certain limitations regarding patient comfort and workflow. e introduction of frameless IGRS allows for noninvasive treatment, which does not require anesthesia and avoids the need for close patient monitoring. [37] e precision of both techniques has been shown to be comparable. [39,40] In 2012, we reported the results and procedure of the first SRS-irdV in a patient with pontine metastasis from clear cell renal cancer, causing compression of the fourth ventricle and partial obliteration of the Silvio aqueduct, with ventricular size index (VSI). On that occasion, 100 Gy were prescribed to the previously defined target, obtaining as a result subjective symptomatic improvement in the 1 st week after the procedure, with elevation of the KPS (90%), as well as reduction of the VSI of 4% in the cranial CT at 7 days. e stoma, 2.63 mm, was evidenced in the 3-month MRI of control, and in the cine phase, the patent of the CSF circulation from the third ventricle to the interpeduncular cistern was observed. e VSI was maintained between 30 and 32% throughout its follow-up (8 months), and no complications were reported. [16] ese clinical results (clinical improvement in 1 week, improvement in functional status, without data on failure, and without complications) and imaging (reduction of ventricular size on CT and evidence of stoma on MRI) are similar to those obtained in the patient undergoing IGRS-irdV. 
CONCLUSION e IGRS-irdV is a noninvasive and highly accurate technique, which in this case proved to be safe and effective, and whose main indication could be found in selected patients with mild obstructive hydrocephalus secondary to tumoral pathology. Declaration of patient consent e authors certify that they have obtained all appropriate patient consent. Financial support and sponsorship Nil. Conflicts of interest ere are no conflicts of interest.
On minimal Type IIB $AdS_6$ solutions with commuting 7-branes We construct Type IIB supergravity solutions with geometry $AdS_6\times S^2$ warped over a disc with two boundary points where 5-branes emerge and punctures with 7-brane monodromy. They describe $(p,q)$ 5-brane junctions with two groups of like-charged external 5-branes that are unconstrained by the $s$-rule and an additional group of constrained 5-branes. The dual 5d SCFTs include various theories discussed previously in the literature. We match SCFT operators with scaling dimension of $\mathcal O(N)$ with their representation in supergravity to support the proposed dualities. Introduction and summary Interacting five-dimensional superconformal field theories (SCFTs) do not have a conventional Lagrangian description, but they can be engineered in string and M-theory, and they often admit relevant deformations that flow to Lagrangian gauge theories in the infrared [1][2][3][4][5][6][7]. A particularly versatile approach to constructing 5d SCFTs is via (p, q) 5-brane webs in Type IIB string theory [8][9][10], and the constructions can be further generalized by including 7-branes [11]. General brane webs engineer mass deformations of the 5d SCFTs, while the conformal limits are described by junctions of 5-branes at a point. In the absence of a Lagrangian description for the 5d SCFTs, AdS/CFT dualities can be particularly useful for gaining insight into these theories. A large class of Type IIB supergravity solutions describing the near-horizon limit of (p, q) 5-brane junctions has been constructed in [12][13][14], 1 and provides the stepping stone for AdS/CFT analyses of the 5d SCFTs engineered in Type IIB string theory. The geometry takes the form AdS 6 × S 2 warped over a disc Σ. The boundary of Σ, at which the S 2 collapses to smoothly close off the ten-dimensional geometry, contains isolated points at which the 5-branes of the associated 5-brane junction emerge. Various aspects of these solutions have been studied since [29][30][31][32][33], and detailed comparisons to field theory results support the identification of these solutions as holographic duals for 5d SCFTs engineered by (p, q) 5-brane junctions [34,35]. The solutions have been extended to include punctures on Σ with commuting 7brane monodromies in [36]. 2 These solutions with monodromy correspond to 5-brane junctions where, within one group of like-charged 5-branes, multiple 5-branes terminate on the same 7-brane [34,37], and are thus constrained by the s-rule [38]. The 5d SCFTs engineered by such constrained junctions are related to those engineered by unconstrained junctions through RG flows along the Higgs branch. When considering mass deformations of the SCFT, such that the 5-brane junction becomes a 5-brane web with open faces, the 7-branes on which multiple 5-branes end may each be moved into the web. Attached 5-branes are successively removed through the (inverse) Hanany-Witten effect in the process, and the 7-branes may be moved to a face where they have no 5-branes attached. The conformal limit of this form of the brane web is described by the supergravity solutions with monodromy. The locations of the 7-branes inside the brane web are encoded in the positions of the punctures on Σ. 3 Solutions without monodromy have at least three points on the boundary of Σ where groups of like-charged external (p, q) 5-branes emerge, corresponding to the fact that three groups of 5-branes are needed to create an intersection point and thus a 5d SCFT. 
The aforementioned understanding of solutions with monodromy suggests that solutions with punctures should exist with only two boundary points where groups of unconstrained external 5-branes emerge, while an additional group of constrained 5-branes is realized by punctures. In this note we construct such solutions, following the strategy of [36] but relaxing one of the regularity assumptions. These solutions are minimal in the sense that the number of groups of external 5-branes can not be smaller for this class of solutions with only commuting monodromies. The dual field theories include a variety of theories that have been discussed previously in the literature. In particular, a class of 5d T N theories [38], which reduce to 4d T N theories upon compactification. The types of punctures characterizing the 4d T N theories are encoded in the 5d version in the combinatorics of how the external 5-branes terminate on 7-branes. Holographic duals for the 5d T N theories corresponding to three maximal partitions have been discussed in [34]. With the solutions constructed here this can be generalized to 5d T N theories with two maximal and one arbitrary partition. This includes the R 0,N theories of [40,41] and the χ k N theories of [41], and in particular large-N generalizations of the E 7 theory. For two subclasses of two-pole solutions we discuss in detail the associated 5-brane junctions and dual 5d SCFTs. The first is a subclass of the T N theories, which includes the R 0,N and χ k N theories and the E 7 theory, and the second one is a constrained version of the D5/NS5 intersection discussed initially in [10], which includes the S 5 theory of [34] as well as the E 6 and E 7 theories. We discuss relevant deformations of the SCFTs that flow to quiver gauge theories, in which we identify "stringy" operators with scaling dimensions of O(N ) in the large-N limit and determine their scaling dimensions. This data can be compared to results obtained from the proposed dual supergravity solutions, where the stringy operators are realized as strings. We show that the two approaches agree, to support the proposed dualities. The paper is organized as follows. In sec. 2 we review the Type IIB AdS 6 solutions. In sec. 3 we construct solutions with two boundary points where 5-branes emerge and an arbitrary number of punctures with commuting monodromies. In sec. 4 we discuss the dual field theories for two subclasses of solutions and match supergravity results to field theory. Warped AdS 6 solutions with 7-branes The geometry in the solutions of [12][13][14]36] is a warped product of AdS 6 × S 2 over a Riemann surface Σ, with metric and complex two-form given by where w is a complex coordinate on Σ and vol S 2 the canonical volume form on a unitradius S 2 . The general solution to the BPS equations is parametrized by two locally holomorphic functions A ± on Σ. The metric functions are where The function C and the axion-dilaton scalar B = (1 + iτ )/(1 − iτ ) are given by Physically regular solutions without monodromy were constructed in [13,14] for the case where Σ is a disc or equivalently the upper half plane. On the upper half plane the locally holomorphic functions are with a superscript s indicating that they are single-valued in the interior of Σ and A 0 + = −A 0 − . The differentials ∂ w A s ± have poles at isolated points r on the real line, and non-degenerate solutions need L ≥ 3. 
The residues Z ± are given in terms of L − 2 complex parameters s n in Σ and a complex normalization σ by There are additional regularity conditions whose precise form will not be needed here. Physically regular solutions with monodromy were constructed in [36]. We will, without loss of generality, restrict the discussion to the case of D7-brane monodromy. The additional parameters are the locations of the punctures, w i , the orientation of the associated branch cuts, parametrized by complex phases γ i with |γ i | = 1, and relative weights n i . With the locally holomorphic functions for a solution with D7-brane monodromy are where the integration contour is chosen such that it does not cross any branch cuts. The residues of ∂ w A ± at the poles r are The remaining regularity conditions constraining the parameters are , · · · , I} the set of branch points for which the associated branch cut intersects the real line in the interval (r k , ∞), J k is given by where the integral over x is along the real line. At the poles r , (p , q ) 5-branes emerge, with charges given in terms of Y + by [34] where a D5 brane has charge (±1, 0) and an NS5 brane (0, ±1). Minimal solutions with commuting 7-branes In this section we construct 2-pole solutions with punctures and SL(2, R) monodromy, and explicitly solve all regularity conditions. Regularity conditions The construction of solutions with monodromy, as summarized in sec. 2, starts out from solutions without monodromy. The conditions in (2.12) are sufficient to guarantee the regularity conditions κ 2 , G > 0 in the interior of Σ and κ 2 = G = 0 on ∂Σ. This was shown in [36] using that A s ± in (2.10) correspond to regular solutions without monodromy. In particular, κ 2 for solutions with monodromy can be expressed as If κ 2 s is positive in the interior of Σ and zero on the boundary, which requires L ≥ 3 poles in ∂ w A s ± , the properties of f guarantee that the same is true for κ 2 , thus satsifying the regularity conditions. However, while positivity of κ 2 s in the interior of Σ and its vanishing on the boundary are sufficient to guarantee regularity of κ 2 , they are not both necessary: One may allow for κ 2 s to vanish identically. The properties of f still guarantee that κ 2 = 0 on ∂Σ. If ∂ w A s + − ∂ w A s − is non-zero throughout Σ, κ 2 is also positive in the interior of Σ. This permits the construction of 2-pole solutions. The discussion of the remaining regularity conditions proceeds analogously to the case with three or more poles, and leads to the conditions (2.12) with L = 2. SL(2, R) automorphisms of H The explicit expressions in the constructions to follow can be simplified by a convenient choice of coordinate on Σ. The SL(2, R) automorphisms of the upper half plane act as They can be used to fix the position of both poles, and we choose them symmetric with respect to reflection across the imaginary axis as This leaves a family of residual SL(2, R) transformations. Namely, those with c = b, d = a and a 2 = 1 + b 2 . They can be used to map an arbitrary point in the interior of the upper half plane to the imaginary axis. Ansatz Since the residues as given in (2.8) sum to zero by construction, for L = 2 we have The differentials ∂ w A s ± are given by This leads to identically vanishing κ 2 s and a solution without monodromy would be degenerate. 
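As a quick numerical check of the residual automorphisms described in sec. 3.2, the sketch below verifies that a Möbius map w → (aw + b)/(cw + d) with c = b, d = a and a² = 1 + b² fixes the two poles and preserves the upper half plane. It assumes the standard Möbius form of the SL(2, R) action (the explicit formula is elided above) and the symmetric choice of poles r₁ = −r₂ = 1 quoted later in the text; the parameter values are illustrative.

```python
# Sketch: residual SL(2, R) automorphisms of the upper half plane that fix the
# poles chosen at r_1 = +1 and r_2 = -1 (cf. sec. 3.2).  Assumes the standard
# Mobius action w -> (a*w + b)/(c*w + d) with a*d - b*c = 1; the subfamily with
# c = b, d = a and a^2 = 1 + b^2 should fix +1 and -1 and preserve Im w > 0.

def mobius(w, a, b, c, d):
    return (a * w + b) / (c * w + d)

for b in (-1.3, 0.4, 2.0):                  # illustrative parameter values
    a = (1 + b * b) ** 0.5                  # enforce a^2 = 1 + b^2, i.e. det = 1
    assert abs(mobius(1.0, a, b, b, a) - 1.0) < 1e-12
    assert abs(mobius(-1.0, a, b, b, a) + 1.0) < 1e-12
    w = 0.3 + 1.7j                          # generic point in the upper half plane
    assert mobius(w, a, b, b, a).imag > 0   # image stays in the upper half plane
print("residual maps fix w = +1, -1 and preserve the upper half plane")
```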
For solutions with non-trivial monodromy, however, κ 2 as given in (3.1) is positive in the interior of Σ, provided that With the poles as in (3.3), the locally holomorphic functions (2.10) take the form with f given in (2.9). We use the residual SL(2, R) transformations discussed in sec. 3.2 to fix, without loss of generality, one puncture to lie on the imaginary axis, Solving the regularity conditions The remaining regularity conditions are given in (2.12). The first set of conditions, (2.12a), for L = 2 and the choice of poles in (3.3) becomes With w 1 fixed as in (3.8), the I = 1 condition implies A 0 + therefore has to be purely imaginary. The remaining conditions in (3.9) reduce to This implies that all punctures have to be on the imaginary axis. They may be parametrized more conveniently as It remains to solve the second set of conditions, (2.12b). Due to (3.4), and that Y k = 0 due to (3.6), the conditions reduce to Since the J k are imaginary by construction, the first condition simply fixes A 0 + . The second term in the definition of J k in (2.13) vanishes for L = 2. Explicit evaluation, using (3.11), yields The integrand is odd under x → −x, which is sufficient to show that J 1 − J 2 = 0, such that the second condition in (3.12) is satisfied. Summary and parameter count In summary, for an arbitrary choice of α i ∈ R + , n i ∈ R and γ i with |γ i | = 1, for i = 1, .., I, as well as a complex Z + with non-vanishing real part, there is a regular 2-pole solution. The locally holomorphic functions are with Z − = −Z + and The residues at the poles, which translate to the charges of the external 5-branes, are The requirement for Z + to have non-vanishing real part corresponds to the fact that a brane configuration with only D5 and D7 branes does not realize a 5d theory. For the construction we have assumed punctures with D7-brane monodromy. That is, the SL(2, R) monodromy around the punctures at w i = iα i is given by A solution with generic but commuting SL(2, R) monodromies can be obtained by an SL(2, R) ∼ = SU(1, 1) transformation parametrized by u, v ∈ C with |u| 2 − |v| 2 = 1 [36] A An appropriate choice of transformation is This introduces another independent parameter for the general solution, which is the ratio of p and q. The SL(2, R) automorphisms of the upper half plane are entirely fixed by the choices of poles in (3.3) and of the first puncture in (3.8). We thus recover a total of 3 + 3I parameters for a solution with I punctures. 5-brane junctions and 5d SCFTs We discuss two classes of 5d SCFTs. The holographic duals for the first class are generically three-pole solutions which reduce to two-pole solutions for particular choices of the parameters. The holographic duals for the second class are generically two-pole solutions. We discuss field theory operators which can be identified in the holographic duals and show that their properties match in the two descriptions. Figure 1. On the left the T N,K,j junction with N = 4, K = j = 2, which realizes the E 7 theory. On the right hand side for N = 5, j = 1, K = 3. The T N,K,j theory The 5d T N theories are realized by triple junctions of N D5, N NS5 and N (1, 1) 5branes [38]. They are characterized by the way in which the external 5-branes of the junction are partitioned into groups terminating on the same 7-brane. We discuss the case of unconstrained NS5 and (1, 1) 5-branes, but with j groups of K > 1 D5 branes each terminating on one D7-brane, leaving N − jK unconstrained D5 branes ( fig. 1). 
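As a small bookkeeping aid for the junctions just described, the sketch below records, for a T_{N,K,j} configuration, the number of unconstrained D5-branes and checks the standard requirement that the (p, q) charges of the junction sum to zero, using the conventions quoted in sec. 2 (a D5-brane carries (±1, 0), an NS5-brane (0, ±1)). The orientation chosen for the (1,1) 5-branes and the example values are illustrative assumptions, not taken from the paper.

```python
# Sketch: bookkeeping for a T_{N,K,j} junction (N D5, N NS5, N (1,1) 5-branes,
# with j groups of K D5-branes ending on a common D7-brane).  Uses the charge
# conventions of sec. 2: D5 = (±1, 0), NS5 = (0, ±1).  Charge conservation at the
# junction (total (p, q) = 0) is the standard requirement; values are illustrative.

def t_nkj_data(N, K, j):
    assert j * K <= N and K > 1, "need j*K <= N with K > 1 constrained D5-branes per D7"
    unconstrained_d5 = N - j * K
    charges = [(N, (1, 0)),      # D5-branes
               (N, (0, 1)),      # NS5-branes
               (N, (-1, -1))]    # (1,1) 5-branes, oppositely oriented
    total = (sum(n * p for n, (p, q) in charges),
             sum(n * q for n, (p, q) in charges))
    return unconstrained_d5, total

print(t_nkj_data(N=4, K=2, j=2))   # E7-like example: 0 unconstrained D5s, charge (0, 0)
print(t_nkj_data(N=5, K=3, j=1))   # R_{0,5}-like example: 2 unconstrained D5s
```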
In the notation of [42,43] for the partitions, this corresponds to For N = 4 and j = K = 2 this is the E 7 theory [38]. For j = 0 or K = 1 the T N,K,j theories reduce to the unconstrained T N theories, for which the supergravity duals were discussed in [34]. The R 0,N theories of [40,41] correspond to K = N − 2, j = 1, and the χ k N theories of [41] are contained as χ k N = T N,N −k−1,1 . The T N,K,j theories are obtained from the unconstrained T N theories by RG flows along the Higgs branch, and the global symmetry is reduced from SU (N ) 3 to Gauge theory deformations of these theories can be read off from the mass deformation shown in fig. 2, where those D7-branes which have multiple D5-branes attached were moved to a position where they have no D5-branes attached. For N > jK the quiver gauge theory is where α is an SU (N − K) index andβ an SU (j) index. The scaling dimension is This operator is in the (N − jK,j) representation of the SU (N − jK) × SU (j) part of the global symmetry. In the 5-brane junction it is realized by a fundamental string connecting D7-branes and D5-branes as shown in fig. 2. There are further meson operators in the field theory which correspond to string junctions. For jK = N there are no unconstrained D5-branes. For j = 2 and N = 2K, the configuration in fig. 2 is a gluing of two unconstrained T K theories, gauging their SU (K) flavor symmetries. In the gauge theory deformation the D7 branes add flavor to the gauged SU (K) node, and the quiver gauge theory is (4.6) [2] For N = 4, K = j = 2, this is SU (2) with six flavors, realizing the E 7 theory. For j > 2 and N = jK the gauge theory deformation is | (4.7) [2] The meson operators in these theories correspond to string junctions. The T N,K,j solution Supergravity duals for the T N,K,j theories can be realized by considering the brane web for a general mass deformation of the SCFT and moving the D7 branes with multiple D5-branes ending on them into the web, as shown in fig. 3. The supergravity solution corresponding to the conformal limit of the resulting brane web then takes the form shown in fig. 4, with three poles on the boundary of Σ, corresponding to unconstrained D5, NS5 and (1, 1) 5-branes, as well as a puncture with D7-brane monodromy. For N = jK, the D5-brane charge at the pole vanishes and the solution reduces to a two-pole solution. For the explicit realization Σ is taken as the upper half plane and the general ansatz has three poles. The SL(2, R) automorphisms can be used to fix This exhausts the freedom to make SL(2, R) transformations and the punctures can not from the outset be fixed to the imaginary axis. The residues corresponding via (2.14) to the 5-brane charges in fig. 4 are For jK = N the residue at r 2 vanishes and the solution reduces to a two-pole solution. The appropriate monodromy for the puncture is realized by fixing in (2.9) All relevant brane charges, N , K and j, are manifest in the supergravity solution. The parameters realizing the residues (4.9), via (2.8) and (2.11), and solving the regularity conditions (2.12) are with A + 0 = 1 2 J 1 + Y 2 + ln 2. As jK approaches N from below, the zero s 1 (s 1 ) of ∂ w A + (∂ w A − ) approaches the pole at r 2 , annihilating it for N = jK, such that the configuration reduces to a two-pole solution. The puncture is fixed to the imaginary axis by the regularity conditions, and the branch cut intersects the pole at r 2 . As discussed in [37] this does not affect the regularity conditions for poles with imaginary residue. 
String embedding Holographically, the meson operator (4.4) is realized as a string embedded into the solution. The equation of motion for (p, q) strings embedded along the time coordinate of global AdS 6 and a one-dimensional curve in Σ, as well as the expressions for the scaling dimension and R-charge of the dual operator, were derived in sec. 4.1 of [34]. The equation of motion for the embedding function w(ξ) reads where q T Mq = e 2φ (p − qχ) 2 + e −2φ q 2 . Scaling dimension and charge are given by (p Re(dC) − q Im(dC)) . (4.14) For the three-pole solutions corresponding to the T N,K,j junctions, we expect to find a fundamental string connecting one of the j D7-branes on which K constrained 5-branes end to one of the unconstrained D5-branes ( fig. 2). This string naturally transforms in the (N − jK,j) representation of the SU (N − jK) × SU (j) part of the global symmetry (4.2). We indeed find a fundamental string embedded into the solution, connecting the D7-brane puncture and the D5-brane pole along the imaginary axis in the upper half plane. This embedding lies on the branch cut, but the terms in the equation of motion and the integrands in (4.13), (4.14) are single-valued for (p, q) = (1, 0). The embedding solves the equation of motion and yields The scaling dimension is related to the charge by the expected BPS relation and both agree with the field theory result (4.5) at large K. The + N,M,k,j theory The second class of 5d SCFTs is realized by quartic junctions, constructed from two groups of M unconstrained NS5 branes and two groups of N D5-branes. One group of N D5-branes is partitioned into k > N/M subgroups of N/k that end on the same D7-brane, the other group of N D5-branes is partitioned into j > N/M subgroups of N/j that end on the same D7-brane, fig. 5(a). The + N,M,k,j and + N,M,j,k theories are related by a parity transformation. For k = j = N the + N,M,k,j junctions include the unconstrained D5/NS5 intersection discussed initially in [10], and for j = N they include the + N,M,k theories of [34]. The global symmetry of the + N,M,k,j SCFT is Figure 5. The + N,M,k,j theory with N = 6, M = 5, j = 3 and k = 2 in 5(a), along with a D1 (red vertical curve) and an F1 string (blue horizontal curve) representing the stringy operators. Fig. 5(b) shows a mass deformation for M ≥ N j + N k . We expect stringy operators in the (1, 1, k,j) and (M,M, 1, 1) representations of this group, realized as F1 and D1 strings as shown in fig. 5(a). For k = 1 (or j = 1), the + N,M,k,j junction is equivalent to a T N theory. This can be seen for k = 1 by moving the single D7-brane on the right hand side in fig. 5(a) In particular, the + 2,4,1,1 junction gives an alternative realization of the E 7 theory and the + 2,3,1,2 junction gives an alternative realization of the E 6 theory. Moreover, for M = 5 and k = j = 1 with either N = 2 or N = 3 the + N,M,k,j junction realizes the S 5 theory of [41]. For j = k, moving the k D7-branes on the right all the way to the left and the j D7-branes on the left all the way to the right leads to another intersection of the + N,M,k,j form. We find the equivalence + N,M,k,k = + kM −N,M,k,k . In particular, for j = k = N/(M − 1) the + N,M,k,j junctions are equivalent to unconstrained junctions. The + N,N +1,1,1 junctions are equivalent to + 1,N +1,1,1 junctions, which have an S-dual description as (N + 1) 2 free hypermultiplets. For the discussion of more general gauge theory deformations we restrict the parameters to M ≥ N k + N j (in addition to k, j > N/M ). 
The case of smaller M can be treated analogously. The gauge theory can be read off after mass deforming and moving the D7-branes as in fig. 5(b). The quiver gauge theory is given by For M = N k + N j there is one SU (N ) gauge node in the center with [k + j] flavors. For k = 1 the would-be (1) on the right end is replaced by two fundamental hypermultiplets, [2], and likewise on the left end for j = 1. The SU (k) × SU (j) part of the SCFT global symmetry is manifest in the gauge theory as flavor symmetry. The stringy operators in the gauge theory include the mesons where α is an SU (j) index andβ an SU (k) index. This operator is in the (k,j) representation of SU (k) × SU (j) and the scaling dimension is Moreover, there are SU (k) × SU (j) singlet operators with scaling dimension 3/2N , where the x i are bifundamentals between SU (N ) gauge nodes and we introduced the short hand notation det( These are components of the (M,M, 1, 1) SCFT operator. In the 5-brane junction the meson operator (4.18) is expected to be realized by a fundamental string, and the operators in (4.20) by a D1-brane, as shown in fig. 5(a). The + N,M,k,j solution The supergravity solution corresponding to the junction in fig. 5(a) is constructed by moving the D7-branes into the web, as shown in fig. 6, and given by a two-pole solution with two D7-brane punctures, as illustrated in fig. 7. Unlike for the solutions discussed so far, the brane charges in the solution alone do not uniquely identify the associated 5-brane junction. The number of NS5-branes, M , and the numbers of D7-branes, k and j, are manifest in the solution as residues and monodromies, but the total number of D5-branes, N , is not. The location of the punctures, however, is expected to be different for different N . We will use one of the stringy operators discussed above to precisely identify the supergravity solution associated with a given 5-brane junction. The two punctures in the supergravity solutions corresponding to the + N,M,k,j theories are realized by fixing I = 2 in (2.9) and The branch cuts do not intersect for α 1 < α 2 . With this choice of branch cut orientations, f (x) = −f (−x) for x ∈ R. A convenient parametrization for the locations of the punctures is Without loss of generality we can choose 0 < θ 1 , θ 2 < π/2. The residues realizing the NS5-brane poles via (2.14) are The first equality implies f (1) = f (−1) = 0, and these residues are realized for This fixes one of θ 1 , θ 2 . In addition to the 5-brane junction parameters M , k, j, which are realized directly in the supergravity solution as poles and punctures, the solution has one real degree of freedom in θ 1 , θ 2 , which encodes N . The precise relation will be determined next. Figure 7. Disc representation of the + N,M,k,j solution. The straight horizontal blue line represents a fundamental string stretching between the D7-brane punctures, the solid red curve represents a D1 brane stretching between the NS5-brane poles. String embeddings The string theory realization of the (1, 1, k,j) and (M,M, 1, 1) operators as F1 and D1 strings, respectively, as shown in fig. 5(a), leads to the supergravity representation shown in fig. 7. The brane intersection in fig. 6(a) has a Z 2 symmetry acting as reflection across the horizontal line along which the branch cuts are oriented. In the upper half plane with poles at r 1 = −r 2 = 1, this symmetry acts as reflection across the imaginary axis. 
An embedding of a fundamental string along a straight line connecting the two punctures on the imaginary axis consequently solves the equations of motion. Scaling dimension and charge of the string are ∆ F1 = 3Q F1 , Q F1 = 1 2 − θ 1 + θ 2 π M . (4.25) This also satisfies the expected BPS relation. In the limit θ 1 , θ 2 → 0, in which the punctures approach the boundary, the scaling dimension reduces to the result for the F1 operator in the + N,M theory [34], as expected. The embedding of the D1 brane is less straightforward to find, but the scaling dimension can be determined as follows. For k = j there is a second Z 2 symmetry, which on the disc as in fig. 7 acts as reflection across the vertical diameter. Embedding the D1 along the straight line connecting the poles consequently solves the equations of motion for j = k. Deforming the solution to k = j will deform the embedding, in general to a curve as shown in fig. 7. The R-charge (4.14), however, can be determined from the end points of the D1-brane alone. This yields Q D1 = M kθ 2 π . (4.26) The scaling dimension is then fixed by the BPS relation, to ∆ D1 = 3Q D1 . Since the D1 is expected to realize the operator (4.20), with ∆ = 3/2N , this result can be used to solve for N in terms of θ 1 or θ 2 , which yields N = 2M kθ 2 π = 2M jθ 1 π . (4.27) This completes the identification of parameters in the supergravity solution with those of the 5-brane junction. With the expression for N in (4.27), the scaling dimension of the F 1 operator in (4.25) becomes This agrees with the field theory result (4.19) at large N , M , providing a non-trivial consistency check and confirming the interpretation of the supergravity solutions as holographic duals for the + N,M,k,j theories. The restriction α 1 < α 2 , which ensures that the branch cuts do not intersect in the supergravity solution, translates to N j + N k < M . This is the assumption used to derive the quiver (4.17). Supergravity solutions for smaller values of M can be constructed by first rotating the branch cuts in the brane intersection in fig. 6(a) and then identifying the corresponding two-pole solution.
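To make the identification of parameters for the +_{N,M,k,j} solutions concrete, the sketch below evaluates, for illustrative values of (N, M, k, j), the puncture angles from N = 2Mkθ₂/π = 2Mjθ₁/π, the constraint N/j + N/k < M, and the string charges Q_D1 = Mkθ₂/π and Q_F1 = (1/2 − (θ₁ + θ₂)/π)M together with the BPS relation Δ = 3Q. Only these relations come from the text; the numerical example is made up.

```python
# Sketch: bookkeeping for the +_{N,M,k,j} identification in sec. 4.
# Given (N, M, k, j), the puncture angles follow from N = 2Mk*theta2/pi = 2Mj*theta1/pi,
# and the string charges/dimensions quoted in the text can then be evaluated:
#   Q_D1 = M*k*theta2/pi   (so Delta_D1 = 3*Q_D1 should equal (3/2)*N),
#   Q_F1 = (1/2 - (theta1 + theta2)/pi) * M,   Delta_F1 = 3*Q_F1.
# The example values below are illustrative; only the relations come from the text.
from math import pi, isclose

def plus_theory_data(N, M, k, j):
    theta1 = pi * N / (2 * M * j)
    theta2 = pi * N / (2 * M * k)
    assert 0 < theta1 < pi / 2 and 0 < theta2 < pi / 2
    assert N / j + N / k < M          # branch cuts must not intersect (alpha1 < alpha2)
    Q_D1 = M * k * theta2 / pi
    Q_F1 = (0.5 - (theta1 + theta2) / pi) * M
    return theta1, theta2, 3 * Q_D1, 3 * Q_F1

N, M, k, j = 6, 7, 2, 3               # illustrative charges
theta1, theta2, delta_D1, delta_F1 = plus_theory_data(N, M, k, j)
assert isclose(delta_D1, 1.5 * N)     # D1 string reproduces the Delta = (3/2)N operator
print(f"theta1={theta1:.3f}, theta2={theta2:.3f}, Delta_D1={delta_D1}, Delta_F1={delta_F1}")
```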
Coupling hidden Markov models for the discovery of cis-regulatory modules in multiple species

Cis-regulatory modules (CRMs) composed of multiple transcription factor binding sites (TFBSs) control gene expression in eukaryotic genomes. Comparative genomic studies have shown that these regulatory elements are more conserved across species due to evolutionary constraints. We propose a statistical method to combine module structure and cross-species orthology in de novo motif discovery. We use a hidden Markov model (HMM) to capture the module structure in each species and couple these HMMs through multiple-species alignment. Evolutionary models are incorporated to account for the correlated structures among aligned sequence positions across different species. Based on our model, we develop a Markov chain Monte Carlo approach, MultiModule, to discover CRMs and their component motifs simultaneously in groups of orthologous sequences from multiple species. Our method is tested on both simulated and biological data sets in mammals and Drosophila, where significant improvement over other motif and module discovery methods is observed.

1. Introduction. Gene transcription is regulated by interactions between transcription factors and their binding sites on DNA. The analysis of genomic sequences for short sequence elements (cis-regulatory elements) that mediate such interactions is an important problem in computational biology. In this paper we develop a method for predicting cis-regulatory elements based on the statistical modeling of combinatorial control by multiple transcription factors, and of cross-species conservation of the regulatory roles of these factors. The remaining part of this Introduction provides a review of the relevant literature and the background of our approach. Section 2 presents our model.

Please note that our model, to be developed in this section, is applicable to the situation where different genes have distinct numbers of orthologs, that is, some orthologs are missing for some genes. For notational ease, missing orthologs are treated as zero-length sequences, so that we always assume n sequences from each species.

2.1. The HMM for module structure. Let us first focus on the module structure in one sequence. We assume that the sequence is composed of two types of regions, modules and background. A module contains multiple TFBSs separated by background nucleotides, while background regions contain only background nucleotides. Accordingly, we assume that the sequence is generated from a hidden Markov model with two states, a module state (M) and a background state (B). In a module state, the HMM either emits a nucleotide from the background model (of nucleotide preference) θ_0, or it emits a binding site of one of the K motifs (PWMs) Θ_1, Θ_2, . . . , Θ_K. The probabilities of emission from θ_0 and Θ_k (k = 1, 2, . . . , K) are denoted by q_0 and q_k, respectively (q_0 + q_1 + · · · + q_K = 1) [Figure 1(A)]. Note that a module state can be further decomposed into K + 1 states, corresponding to within-module background (M_0) and the K motif binding sites (M_1 to M_K), that is, M = {M_0, M_1, . . . , M_K}. Assuming that the width of motif k is w_k, a binding site of this motif, a piece of sequence of length w_k, is treated as one state of M_k as a whole (k = 1, 2, . . . , K). The transition probability from a background to a module state is r, that is, the chance of initiating a new module is r.
The transition probability from a module state to a background state is t, that is, the expected length of a module is 1/t. We denote the transition matrix, with rows and columns ordered as (B, M), by

T = ( 1 − r    r
      t        1 − t ).   (1)

This model can be viewed as a stochastic version of the hierarchical mixture model (HMx) defined in Zhou and Wong (2004).

2.2. Coupling HMMs via multiple alignment. The HMMs in different orthologs are coupled through multiple alignment, so that the hidden states of aligned bases in different species are collapsed into a common state [Figure 1(B)]. For instance, the nucleotides of state 4 in the three orthologs are aligned in Figure 1(B). Thus, these three states are collapsed into one state, which determines whether these aligned nucleotides are background or binding sites of a motif. (Note that these aligned nucleotides in different orthologs are not necessarily identical.) Here hidden states refer to the decomposed states, that is, B and M_0 to M_K, which specify the locations of modules and motif sites. This coupled hidden Markov model (c-HMM hereafter) has a natural graphical model representation [lower panel of Figure 1(B)], in which each state is represented by a node in the graph and the arrows specify the dependence among them.

[Figure 1 caption (partial): ... (upper panel) and its corresponding graphical model representation of the c-HMM (lower panel). The nodes represent the hidden states. The vertical bars in the upper panel indicate that the nucleotides emitted from these states are aligned and thus collapsed in the lower panel. Note that a node will emit w_k nucleotides if the corresponding state is M_k (k = 1, . . . , K). (C) The evolutionary model for motifs, using one base of a motif as an illustration. The hidden ancestral base is Z, which evolves to three descendant bases X(1), X(2) and X(3). Here the evolutionary bond between X(1) and Z is broken, implying that X(1) is independent of Z. The bond between X(2) and Z and that between X(3) and Z are connected, which means that X(2) = X(3) = Z.]

The transition (conditional) probabilities for nodes with a single parental node are defined by the same T in (1). We define the conditional probability for a node with multiple parents as follows: if node Y has m parents, each in state Y_i (i = 1, 2, . . . , m), then we have

P(Y = y | Y_1, . . . , Y_m) = (C_B/m) T(B, y) + (C_M/m) T(M, y),   (2)

where C_B and C_M are the numbers of parents in states B and M, respectively (m = C_B + C_M). This equation shows that the transition probability to a node with multiple parents is defined as the weighted average of the transition probabilities from parental nodes in background states and in module states. We use the same emission model described in the previous section for unaligned states. For aligned (coupled) states, we assume star-topology evolutionary models with one common ancestor, although the method can be readily generalized to a tree topology. The c-HMM first emits (hidden) ancestral nucleotides by the emission model defined in Figure 1(A), given the coupled hidden states. Then, different models are used for the evolution from the ancestral to descendant nucleotides, depending on whether they are background or TFBSs.

2.3. The evolutionary model. A neutral substitution matrix is used for the evolution of aligned background nucleotides, both within and outside of modules, with a transition rate of α and a transversion rate of β:

Φ = ( 1 − α − 2β    β             α             β
      β             1 − α − 2β    β             α
      α             β             1 − α − 2β    β
      β             α             β             1 − α − 2β ),   (3)

where the rows and columns are ordered as A, C, G and T, and µ_b = α + 2β is defined as the background mutation rate.
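To collect the pieces defined so far in one place, here is a minimal sketch of the two-state transition matrix of equation (1), the multi-parent averaging rule of equation (2), and the background substitution matrix of equation (3); the numeric parameter values are illustrative, not estimates from the paper.

```python
import numpy as np

# Minimal sketch collecting the pieces defined in equations (1)-(3):
# the two-state transition matrix T (B = 0, M = 1), the weighted-average rule for
# a coupled node with several parents, and the background substitution matrix Phi.
# Numeric values are illustrative, not estimates from the paper.

def transition_matrix(r, t):
    """Equation (1): r initiates a module, t = 1 / expected module length."""
    return np.array([[1 - r, r],
                     [t, 1 - t]])

def multi_parent_row(T, parent_states):
    """Equation (2): average the B- and M-rows according to the parents' states."""
    m = len(parent_states)
    c_b = parent_states.count(0)
    return (c_b * T[0] + (m - c_b) * T[1]) / m

def background_matrix(alpha, beta):
    """Equation (3): transitions (A<->G, C<->T) at rate alpha, transversions at beta."""
    d = 1 - alpha - 2 * beta
    return np.array([[d, beta, alpha, beta],
                     [beta, d, beta, alpha],
                     [alpha, beta, d, beta],
                     [beta, alpha, beta, d]])

T = transition_matrix(r=0.01, t=1 / 200)             # expected module length 200 bp
print(multi_parent_row(T, parent_states=[0, 1, 1]))  # one B parent, two M parents
print(background_matrix(alpha=0.06, beta=0.02))      # mu_b = alpha + 2*beta = 0.10
```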
We assume independent evolution for each position (column) of a motif under the nucleotide substitution model of Felsenstein (1981), which was also used in Sinha, van Nimwegen and Siggia (2003). Suppose the weight vector of a particular position in the motif is θ. The ancestral nucleotide, denoted by Z, is assumed to follow a discrete distribution with the probability vector θ on {A, C, G, T}. If X is a corresponding nucleotide in a descendant species, then either X inherits Z directly (with probability 1 − µ_f) or it is generated independently from the same weight vector θ (with probability µ_f). The parameter µ_f, which is identical for all the positions within a motif, reflects the mutation rate of the TFBSs. This model takes the PWM into account in the binding site evolution, which agrees with the nonneutral constraint on TFBSs, namely that they are recognized by the same protein (TF). It is obvious that, under this model, the marginal distribution of any motif column is identical in all the species. This evolutionary model introduces another hidden variable, which indicates whether X is identical to or independent of Z for each base of an aligned TFBS. We call these indicators evolutionary bonds between ancestral and descendant bases [Figure 1(C)]. If X = Z, we say that the bond is connected; if X is independent of Z, we say that the bond is broken.

3. Gibbs sampling and Bayesian inference.

3.1. Basic framework. Our full model involves the following parameters: the transition matrix T defined in equation (1), the mixture emission probabilities q_0, q_1, . . . , q_K, the motif widths w_1, . . . , w_K, the PWMs Θ_1, . . . , Θ_K, the background models for ancestral nucleotides and all current species, and the evolutionary parameters α, β and µ_f. We take as input the number of TFs, K, and the expected module length, L, and fix the transition probability t = 1/L in T. Compared to the HMx model in Zhou and Wong (2004), this model has three extra free parameters, α, β and µ_f, related to the evolutionary models. Independent Poisson priors are put on the motif widths, and flat Dirichlet distributions are used as priors for all the other parameters. With a given alignment for each ortholog group, we treat as missing data the locations of modules and motifs (i.e., the hidden states), the ancestral sequences and the evolutionary bonds. We develop a Gibbs sampler (called MultiModule hereafter) to sample from the joint posterior distribution of all the unknown parameters and missing data. To account for the uncertainty in multiple alignment, we adopt an HMM-based multiple alignment [Baldi et al. (1994), Krogh et al. (1994)] conditional on the current parameter values. This is achieved by adding a Metropolis-Hastings step in the Gibbs sampler to update these alignments dynamically according to the current sampled parameters, especially the background substitution matrix Φ [equation (3)]. In summary, the input data of MultiModule are groups of orthologous sequences, and the program builds an initial alignment of each ortholog group by a standard HMM-based multiple alignment algorithm.
Then each iteration of MultiModule is composed of three steps: (1) Given alignments and all the other missing data, we update motif widths and other parameters by their conditional posterior distributions; (2) Given current parameters, with probability u, we update the alignment of each ortholog group; (3) Given alignments and parameters, a dynamic programming approach is used to sample module and motif locations, ancestral sequences and evolutionary bonds. (See the Appendix for the details of the Gibbs sampling of Multi-Module.) The probability u is typically chosen in the range [0.1, 0.3]. Motif and module predictions are based on their marginal posterior distributions constructed by the samples generated by MultiModule after some burn-in period (usually the first 50% of iterations). We estimate the width of each motif by its rounded posterior mean. We record the following posterior probabilities for each sequence position in all the species: (1) P k , the probability that the position is within a site for motif k, that is, the hidden state is M k (k = 1, 2, . . . , K); (2) P m , the probability that the position is within a module, that is, the hidden state is M ; (3) P a , the probability that the position is aligned. All the contiguous segments with P k > 0.5 are aligned (and extended if necessary) to generate predicted sites of motif k given the estimated width w k . The corresponding average P a over the bases of a predicted site is reported as a measure of its conservation. We collect all the contiguous regions with P m > 0.5 as candidates for modules, and a module is predicted if the region contains at least two predicted motif binding sites. The boundary of a predicted module is defined by the first and last predicted binding sites it contains. We use 0.5 as the threshold for posterior probabilities. This threshold determines the trade-off between the sensitivity and the specificity of the predictions. In our experience, values in the range of [0.5, 0.7] for the threshold usually give good performance for the posterior inference on the model. Smaller threshold often results in many false positive predictions. Under the c-HMM, if we fix r = 1 − t = 1 in the transition matrix T [equation (1)], then MultiModule reduces to a motif discovery method, assuming the existence of K motifs in the sequences. This setting is useful when the motifs do not form modules, and we call it the motif mode of MultiModule. 3.2. Multimodality and combined prediction. Although MultiModule converges to the target posterior distribution eventually, from the examples in Section 5 we find that it usually reaches some local mode quickly and then moves around the mode for a long time. Since the waiting time for betweenmode transition is exponentially long, we often run multiple short chains for MultiModule instead of one long chain, that is, we apply MultiModule to a particular data set multiple times with random initialization. In this way, it has a much greater chance to explore different major local modes which often correspond to different motif compositions of predicted modules. However, we note that our module prediction is quite consistent and that major motifs are usually predicted repeatedly in multiple runs (see Section 5.1). We employ a heuristic to select representatives of these motifs by ranking all the predicted motifs according to a Bayesian score derived in Jensen et al. 
(2004) [equation (4)], in which j = A, C, G, T, n is the number of predicted sites, Θ̂ is the estimated motif weight matrix constructed from the predicted sites, θ_0 is the background distribution, and w is the width of the motif. The parameter ρ (<1) is the prior odds of observing a motif site over a background nucleotide. We take ρ = 1/500, which gives a good balance between the specificity of a predicted motif pattern and the number of predicted sites. Using the average P_m of each sequence position over these multiple runs and the top K distinct motifs ranked by (4), we define combined-predicted modules by the same criterion introduced in Section 3.1.

4. Simulation studies. Transcription factors Oct4, Sox2 and Nanog are believed to cooperate in the regulation of genes important to self-renewal and pluripotency of embryonic stem (ES) cells [Boyer et al. (2005)]. We used the following model to simulate data sets in this study: we generated 20 hypothetical ancestral sequences, each of length 1000 bps. Twenty modules, each of 100 bps and containing one binding site of each of the three TFs, were randomly placed in these sequences. TFBSs were simulated from their known weight matrices, with logo plots [Schneider and Stephens (1990)] shown in Figure 2. Then, based on the choices of the background mutation rate µ_b [with α = 3β in equation (3)] and the motif mutation rate µ_f, we generated sequences of three descendant species according to the evolutionary models in Section 2.3. The indel (insertion-deletion) rate was fixed to 0.1µ_b. After the ancestral sequences were removed, each data set finally contained 60 sequences from three species. Our simulation study was composed of two groups of data sets, and in both groups we set µ_f = 0.2µ_b but varied the value of µ_b. In the first group, we set µ_b = 0.1 to mimic the case where species are evolutionarily close. In the second group, we set µ_b = 0.4 to study the situation for remotely related species. For each group we generated 10 data sets independently. We applied MultiModule to these data sets under three different sets of program parameters: (A) Module mode, L = 100, u = 0.2; (B) Motif mode, u = 0.2; (C) Motif mode, u = 0. For each set of parameters, we ran MultiModule for 2,000 iterations with K = 3, searching both strands of the sequences. Initial alignments were built by ordinary HMM-based multiple alignment methods. If u = 0, these initial alignments were effectively fixed along the iterations. The results are summarized in Table 1, which includes the sensitivity, the specificity and an overall measurement score of the performance, defined as the geometric average of the sensitivity and specificity.

[Table 1 note: N2 and N1 refer to the numbers of correct and total predictions for each motif, respectively. TF names are followed by the numbers of true sites in parentheses. The upper and lower halves refer to the average results over 10 independently generated data sets with µ_b = 0.1 and 0.4, respectively. "Overall" is the geometric average of sensitivity and specificity. For each data set, the optimal results (in terms of overall score) among three independent runs under the same parameters were used for the calculation of averages.]

This overall score equals zero if either sensitivity or specificity is zero; it equals 1 if both of them are 1; and when sensitivity = specificity = x, the overall score equals x. These properties make it a good overall measure of the quality of predictions.
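A simplified sketch of the simulation design described above (ancestral sequences with planted modules, evolved to three descendants with background rate µ_b and motif rate µ_f = 0.2µ_b) is given below. The random placeholder PWMs, the single module per sequence, the simplified substitution step and the absence of indels are assumptions made for brevity, so this is an illustration of the setup rather than a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(0)
BASES = np.array(list("ACGT"))

# Simplified sketch of the sec. 4 design: plant a 100-bp module containing one
# site per TF in a 1000-bp ancestral sequence, then evolve three descendants.
# Background bases mutate with probability mu_b; TFBS bases are redrawn from
# their PWM column with probability mu_f = 0.2*mu_b (the sec. 2.3 motif model).
# The random PWMs, single module and absence of indels are simplifications; the
# study used the Oct4/Sox2/Nanog matrices, 20 modules and indels at rate 0.1*mu_b.

def random_pwm(width=8):
    return rng.dirichlet([0.5] * 4, size=width)

def simulate_dataset(pwms, mu_b, n_species=3, length=1000, module_start=450):
    ancestor = rng.integers(0, 4, length)
    site_cols = {}                                   # position -> PWM column
    pos = module_start
    for pwm in pwms:                                 # sites separated by 10 bp
        for col in pwm:
            ancestor[pos] = rng.choice(4, p=col)
            site_cols[pos] = col
            pos += 1
        pos += 10
    mu_f = 0.2 * mu_b
    descendants = []
    for _ in range(n_species):
        seq = ancestor.copy()
        for i in range(length):
            if i in site_cols:                       # motif base: redraw with prob mu_f
                if rng.random() < mu_f:
                    seq[i] = rng.choice(4, p=site_cols[i])
            elif rng.random() < mu_b:                # background base: neutral substitution
                seq[i] = rng.integers(0, 4)
            # (a faithful version would use the full matrix Phi and add indels)
        descendants.append(seq)
    return ancestor, descendants

pwms = [random_pwm() for _ in range(3)]
ancestor, descendants = simulate_dataset(pwms, mu_b=0.1)
print("".join(BASES[descendants[0][445:505]]))
```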
One sees that updating alignments improves the performance for both µ b = 0.1 and 0.4, and the improvement is more significant for the latter setting [cf. results of (B) and (C) in Table 1]. The reason is that the uncertainty in alignments for the cases with µ b = 0.4 is higher than that for µ b = 0.1, and thus updating alignments, which aims to average over different possible alignments, has a greater positive effect. Considering module structure shows an obvious improvement for µ b = 0.1, but it is only slightly better than running the motif mode for µ b = 0.4 [cf. (A) and (B) in Table 1]. We noticed that for µ b = 0.4, MultiModule found all the three motifs under both parameter settings [(A) and (B)] for five data sets, and the predictions in (A) with an overall score of 70% definitely outperformed that in (B) with an overall score of 58%. For the other five data sets, no motifs were identified in setting (A), but in setting (B) (motif mode) subsets of the motifs were still identified for some of the data sets. We suspect that this was caused by the slower convergence of MultiModule in setting (A), because of its higher model complexity, especially when the species are farther apart. One possible quick remedy of this is to use the output from setting (B) as initial values for setting (A), which will be a much better starting point for the posterior sampling. In all the simulation studies, motif widths were updated in the range of [6, 15] by a Metropolis-Hastings step (Section A.4). To illustrate the posterior inference of motif width in MultiModule, we report in Table 2 the histograms of the motif widths of the three TFs from a single run of 1000 Monte Carlo samples after burn-in for one of the simulated data sets with µ b = 0.1, where all the three motifs were unambiguously identified. The posterior probabilities were all concentrated on their respective true motif widths. On the other hand, we also report in this table the motif width distribution when MultiModule output a false motif (i.e., it was none of the three true motifs). One sees that, in this case, the motif width was mostly sampled as w = 6 and decayed very fast for w > 6. This was due to the fact that we restricted the motif width to be between 6 and 15. If we removed this restriction, the width would further decrease to smaller values, which would be a good indication that this motif might be spurious. 5. Applications to biological data sets. We tested MultiModule on two annotated data sets from mammals and fruit flies. Our computational predictions were compared to experimental validations reported previously. Detailed comparison with several published motif and module discovery methods was conducted based on these data sets. Hereafter we say that a predicted site is a correct prediction or that a predicted site overlaps an experimentally verified site if the starting position of the predicted site is within 3 bps to that of a verified site. This definition is used for assessing the performance of all the computational methods mentioned in this article. In the following examples, we set u = 0.2 to update ortholog alignments in MultiModule. 5.1. Muscle-specific genes in mammals. Our first test data set is the 24 skeleton-muscle-specific genes of human and mouse orthologs [Wasserman et al. (2000)]. We combined putative dog orthologs based on UCSC genome alignment (http://www.genome.ucsc.edu). 
The muscle-specific expression of these genes is controlled by five TFs, MEF, MYF, SP1, SRF and TEF, with 16, 25, 21, 14 and 7 experimentally validated binding sites in the human genes, respectively [Thompson et al. (2004)]. These binding sites form 24 validated modules. Here a validated module is defined as a sequence fragment containing at least two TFBSs satisfying the condition that the distance between any two neighboring sites is < 100 bps. The module boundaries are defined by the first and last TFBSs it contains. The total length of these modules is 2716 bps, distributed in 21 human genes. Upstream 3 kb of the human, mouse and dog orthologs were extracted and aligned by mLAGAN [Brudno et al. (2003)]. Based on these alignments, we calculated a conservation score (CS) for each sequence position. Using a threshold, we N-masked nonconserved bases which are more than 1 kb from the TSSs (transcription start site), that is, all the bases within 1 kb are kept irrespective of their CSs. The purpose is to detect promoter and conserved distal enhancer elements. Repeats were masked by "N" using a repeat-masking program (http://repeatmasker.org/). The preprocessing reduced our data set to an average effective length ("N"s are not counted) of 1604 bps per sequence. We applied MultiModule with K = 5 and L (expected module length) = 200 bps. Our pilot study suggests that running multiple short chains is more efficient for MultiModule (see supplemental notes). Thus, we ran the program 50 times independently and 1000 iterations each run. All the predicted motifs were ranked by their Bayesian scores [equation (4)], and the top five distinct motifs corresponded to the binding patterns of SRF, MEF, SP1 and MYF [ Figure 3(A) and (B)], and an AC-rich motif which seems to be a repetitive pattern. We note that MultiModule failed to discover any motifs close to that of TEF, mainly due to the fact that the TEF sites are not enriched enough in this data set (only seven sites in 24 sequences). For the other four known TFs, MultiModule predicted a total of 97 sites in the human genes from the top five ranking motifs, and 45 of them overlap corresponding validated sites. Thus, it achieves a sensitivity of 59% and a specificity of 46% for these four motifs ( Table 3). Note that the specificity is likely to be underestimated since our predictions may include some functional binding sites which have not been experimentally validated yet. The 50 runs contained at least two distinct modes in module composition, denoted by mode A and mode B. In mode A, MultiModule found five motifs corresponding to MEF, MYF, SP1, SRF and the AC-rich motif. In mode B, MultiModule found four motifs, including MYF, SP1, MEF and the AC-rich motif, with some validated SRF sites contained in the predicted MEF sites, that is, one motif was a mixture of MEF and SRF. One sees that these two motifs are both AT-rich in the middle [ Figure 3(A)]. Please note that it is possible that MultiModule output fewer motifs than the pre-specified K, when the posterior motif site probabilities P k of all sequence positions are <0.5 (the posterior probability threshold) for some k. We randomly selected one representative from each mode and checked their overlaps in base pair level with validated modules. Both modes predicted modules at a sensitivity of 65% and a specificity of 41% approximately (Table 4). 
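Using the correct-prediction criterion stated above (a predicted site counts as correct if its start position lies within 3 bp of a validated site's start), the sensitivity, specificity and geometric-mean overall score reported in the tables can be computed along the following lines; the site positions are illustrative, and multiple predictions hitting the same validated site are not handled specially.

```python
from math import sqrt

# Sketch of the evaluation used throughout: a predicted site is "correct" if its
# start position is within 3 bp of a validated site's start; sensitivity =
# correct/validated, specificity = correct/predicted, and the overall score is
# their geometric mean.  The start positions below are illustrative.
def evaluate(predicted_starts, validated_starts, tol=3):
    correct = sum(any(abs(p - v) <= tol for v in validated_starts)
                  for p in predicted_starts)
    sensitivity = correct / len(validated_starts)
    specificity = correct / len(predicted_starts)
    return sensitivity, specificity, sqrt(sensitivity * specificity)

predicted = [102, 340, 518, 760]
validated = [100, 345, 519, 900, 1210]
print(evaluate(predicted, validated))   # -> (0.4, 0.5, ~0.447)
```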
We averaged the posterior module probabilities over the 50 independent runs for each sequence position and used these average P_m's to generate our combined predictions with the top five predicted motifs. This combined prediction shows a significant improvement in specificity without much loss of sensitivity (Table 4). The Bayesian inference by marginal posterior probabilities is illustrated in Figure 3(C) using the gene TNNI1 as an example, which contains two known modules. One sees that (1) high peaks of P_m emerge at the two known modules; and (2) the posterior module and motif probabilities are coupled in conserved regions and thus show similar shapes among the orthologs. We note that the bases in the combined-predicted modules showed a very high average P_m, and the P_m values for 82% of them were >0.9 [Figure 4(A)]. This implies that the module predictions were quite consistent among independent runs despite the slightly different motif compositions. The average P_a's of predicted motifs and modules (Tables 3 and 4) were much higher than the overall average over all sequence positions (58%), which indicates that functional elements are more likely to be located in aligned blocks. We observed that the background mutation rate µ_b was significantly higher than the TFBS mutation rate µ_f [Figure 4(B)]. From the respective definitions of µ_b and µ_f, one sees that, for a TFBS, the equivalent mutation rate comparable to the definition of µ_b is less than µ_f. Thus, the observation that µ_b > µ_f confirms that, even within aligned blocks, TFBSs show a significantly lower evolutionary rate than their surrounding background nucleotides.

[Table 3 note: Tabulated are the numbers of validated TFBSs, predicted sites and overlapping sites (i.e., correct predictions) for the human sequences. We also include the total number of predicted sites in the mouse and dog sequences. *Among the predicted MEF sites, 10 overlap the predicted SRF sites, of which seven turn out to be experimentally validated SRF binding sites.]

5.2. Early developmental genes in Drosophila. Regulatory regions that control early body development in Drosophila melanogaster (Dm, hereafter) were identified in previous experimental studies [e.g., Berman et al. (2002)]. As a test of MultiModule, we extracted all the identified regulatory regions that interact with at least one of the three TFs, Bicoid (Bcd), Hunchback (Hb) and Krüppel (Kr), which form complex combinatorial patterns that regulate early developmental genes. These extracted regions form a data set of 26 Dm sequences. We further extracted orthologous regions in Drosophila pseudoobscura (Dp, hereafter) based on the UCSC genome alignment and obtained 23 of them. Thus, our full data set contains 49 sequences with an average length of 1209 bps. In this data set, 34 functional binding sites in Dm with experimental validation are available based on the TRANSFAC 9.1 release [Wingender et al. (2000)]: 12 Bcd sites in three sequences, 14 Hb sites in four sequences, and 8 Kr sites in two sequences. The genes with validated sites are referred to as the validated gene set in this example. We applied MultiModule to this data set with K = 3 and L = 200 for 50 independent runs, each of 1000 iterations. We ranked all the predicted motifs in the multiple runs by the same Bayesian score [equation (4)]. Our predicted motif binding sites overlap substantially with the validated ones for the three TFs, with an overall sensitivity of 44% and specificity of 47% (Table 5).
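A minimal sketch of the combined-prediction rule used here, namely averaging the per-position module probabilities P_m over independent runs, keeping maximal stretches with average P_m > 0.5, and reporting a module only if it contains at least two predicted motif sites (the criterion of Section 3.1), is given below; the toy arrays are illustrative.

```python
import numpy as np

# Sketch of the combined-prediction rule: average per-position module
# probabilities P_m over independent runs, keep maximal runs of positions with
# average P_m > 0.5, and report a module only if the region contains >= 2
# predicted motif sites.  The toy arrays below are illustrative.
def combined_modules(pm_runs, site_starts, threshold=0.5, min_sites=2):
    pm = np.mean(pm_runs, axis=0)
    above = pm > threshold
    modules, start = [], None
    for i, flag in enumerate(np.append(above, False)):   # sentinel closes the last run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            n_sites = sum(start <= s < i for s in site_starts)
            if n_sites >= min_sites:
                modules.append((start, i - 1))
            start = None
    return modules

pm_runs = np.array([[0.1, 0.2, 0.9, 0.8, 0.7, 0.2, 0.1, 0.6, 0.9, 0.8],
                    [0.2, 0.1, 0.8, 0.9, 0.6, 0.1, 0.2, 0.7, 0.8, 0.9]])
print(combined_modules(pm_runs, site_starts=[2, 4]))   # -> [(2, 4)]
```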
Some of our predicted Kr sites turned out to be validated Bcd sites, which is consistent with the fact that these two TFs actually bind to some overlapping functional sites. Such sites are not counted as correct predictions for Bcd in Table 5. Since the majority of the sequences in this data set are not annotated for TFBSs, we include in Table 5 the sum of squared distances (SSD) between the predicted and the known motif weight matrices as another quality measure of the predictions. These SSDs are approximately 1/10 of the expected SSDs between a random weight matrix and the known ones for the three TFs. We combined all 50 independent runs to generate our combined predictions. In this way, we predicted 70 modules covering 16855 bps (Table 6): 86%, 49% and 30% of these modules contain predicted Hb, Kr and Bcd binding sites, respectively. We used these combined predictions to define regulatory interactions among the maternal gene Bcd and the four gap genes in our data set, Hb, Kr, Gt and Kni, which are the most important TFs controlling early body patterning in Drosophila. Known regulatory interactions among these genes are available from previously reported experiments [Sanchez and Thieffry (2001)]. In our simplified analysis, gene X is defined to be regulated by gene Y if the regulatory region of gene X contains a predicted module composed of at least one binding site for the TF encoded by gene Y. Remarkably, all of our predicted interactions [Figure 5(D)] exactly match the known ones, and our analysis recovers all the known interactions with Bcd, Hb and Kr as regulators. In this data set, the predicted motifs and modules also showed higher conservation (Tables 5 and 6) compared with the overall average P_a of 33%.

[Table 5 note: Predictions in the validated gene set and in the full gene set are tabulated. SSD is the sum of squared distances between a predicted weight matrix and the corresponding known weight matrix. Other fields are defined similarly to those in Table 3. Validated genes in Dm refer to the genes with known TFBSs for the corresponding TFs. Exact binding sites are not available for the remaining genes, although they are known to be controlled by these TFs.]

5.3. Comparing with other methods. We compared the performance of MultiModule on the two data sets with that of AlignACE [Roth et al. (1998)], CompareProspector, EMnEM [Moses, Chiang and Eisen (2004)] and CisModule [Zhou and Wong (2004)]. AlignACE finds multiple motifs using a repeated masking strategy. We ran the program under its default settings to search for motifs with different combinations of input parameters for motif width and expected number of sites. For these two data sets, AlignACE output 40 to 60 motifs per run, and we repeated the program five times for each data set independently, which generated around 250 motifs. CompareProspector searches for motifs in one species given a conservation score for each sequence position, calculated from a multiple alignment. We ran CompareProspector with each known motif width, for a total of 300 independent runs for each data set. EMnEM takes as input one alignment for each ortholog group together with a given phylogenetic tree. We input the alignments built by mLAGAN [Brudno et al. (2003)] and, for the first data set, used a phylogenetic tree with branch lengths (in units of substitutions per site) estimated by Xie et al. (2005) from mammalian upstream sequences. We ran EMnEM with each known motif width w and set all the w-mers as initial consensus sequences.
For the above three methods, we defined representative output for each known motif by the highest-scoring predicted motif (as reported by the respective programs) that matches the known pattern. CisModule is a single-species module discovery method. We ran it 50 times independently under the same parameters (K and L) as those used in MultiModule. The predicted motifs of CisModule were ranked by the same score function [equation (4)]. For all the methods, we used exactly the same sequences after preprocessing as described in the previous sections. The two test data sets in total contain 117 validated TFBSs for the eight known motifs. Table 7 summarizes the performance of each method on these data sets with respect to motif identification and site prediction. MultiModule identified more known motifs than all the other methods. It also found many more validated TFBSs than the other methods did, with an improvement in sensitivity of at least about 70%: MultiModule detected 60 validated TFBSs, while the other methods detected at most 35 validated sites. In addition, the specificity of MultiModule is the highest among all the programs in these tests. This indicates that the high sensitivity of MultiModule does not come at the expense of specificity. In terms of overall performance, MultiModule shows a 53% to 81% improvement compared to the other four methods.

NOTE: We compare different methods based on their predictions with respect to the validated gene sets (i.e., genes with validated TFBSs), which are exactly the same as those used in the applications of MultiModule to these data sets. "ALnACE," "CompPr," "EMnEM," "CisMod" and "MltMod" refer to AlignACE, CompareProspector, EMnEM, CisModule and MultiModule, respectively. We report the number of known motifs each method identified. For correctly identified motifs, the numbers of predicted sites and correct predictions (# overlaps) are reported. Sensitivity is calculated as (# overlaps) / (total number of validated TFBSs = 117). Specificity is calculated as (# overlaps) / (# predicted sites). The overall score is defined as the geometric average of sensitivity and specificity.

6. Discussion. We have proposed and illustrated a new computational approach based on a coupled hidden Markov model for de novo discovery of CRMs and motifs in sequences from multiple species. Our simulation and test results convey three pieces of information about this approach. First, modeling sequence orthology provides more information than using a conservation score or simply pooling sequences from multiple species into a heterogeneous data set, as illustrated by the comparison with other methods. Second, the use of module structure to identify clusters of motif patterns is usually more powerful than identifying each motif independently. Third, updating multiple alignments improves the sensitivity of motif prediction. From this study, we observe that for species within mammals or within Drosophila, aligned regions are usually much longer than the width of TFBSs, and TFBSs clearly show lower mutation rates compared with aligned background nucleotides, as shown in Figure 4(B); this is also true for the Drosophila data set. Since MultiModule samples from a complicated joint probability distribution by a Gibbs sampler, the problem of multimodality needs to be considered. We observe that it is helpful to integrate out weight matrices and other parameters, even in an approximate sense (Section A.6), for the convergence of MultiModule.
To further alleviate the possibility of local traps, we combine samples from multiple randomly initialized chains to construct superior estimates in practice. An alternative approach to this end is the use of more sophisticated Monte Carlo algorithms to handle multimodality, such as parallel tempering [Geyer (1991)] or the equi-energy sampler [Kou, Zhou and Wong (2006)]. The computational complexity of MultiModule is approximately proportional to 2^N KL, where N is the number of species, K is the number of TFs, and L is the total length of the sequences; this is scalable with a reasonable selection of orthologous species. It is worth mentioning that the complexity of the motif mode of MultiModule is linear in N. The use of the c-HMM is not restricted to de novo discovery. With given PWMs and other parameters, MultiModule can be used to scan for modules in ortholog groups. From our experience, for de novo motif finding, MultiModule is suitable for handling sequences with an average length of less than 2 kb. If the average search region is much larger, some preprocessing is needed to reduce it, to save computational cost and to increase the motif site enrichment. This was the reason why we removed the nonconserved bases in the first application, to the muscle-specific genes. For the current implementation of MultiModule, we assume that the number of motifs K is fixed and known. But in real module discovery applications, the exact value of K is usually unknown. It would be desirable to develop a coherent approach to estimate this number simultaneously. However, up to now, we do not have an efficient method to complete this task. The dimensionality of the parameter space is determined by K. With all kinds of missing data in our model, it is impossible to integrate out both the parameters and the missing data to obtain the marginal distribution of K. Thus, we decided to fix K for the current implementation. When K is set to be smaller than the true number of motifs, the program usually finds subsets of the motifs that are highly enriched (some repetitive patterns may be included as well, such as the AC-rich pattern in the first application). When K is set to be greater than the true value, the program may output fewer than K motifs, as we pointed out in Section 5.1. Our suggestion is to apply some pilot runs of MultiModule with different K in a reasonable range, say, between 2 and 6, and check how many distinct motifs it actually outputs. Then one may make a decision on a suitable value of K and perform multiple runs to generate final predictions. In our experience, this ad hoc approach is usually acceptable for most applications. In this article, the module structure is modeled by a one-step Markov chain with two states, which specifies a geometric distribution on the length of a module; a short simulation sketch below illustrates this length model. This is admittedly a simple approximation to the biological reality, which only captures the co-localization of multiple motif sites; that is, the model puts some constraints on the possible locations of the motif sites in the same module. Our current approach should be viewed as a very first step toward incorporating module and phylogenetic structures into de novo motif discovery. There exists substantial room for further development in this direction. One may extend the HMM used in this article to a higher-order Markov chain such that the transition probability to a motif depends on the previous motif in the module. This allows the model to estimate possible synergistic interactions between neighboring motifs.
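To make the geometric length model concrete, the following minimal simulation (an illustration only, not taken from the MultiModule implementation) draws module lengths from the two-state chain with module-to-background transition probability t = 1/L and checks that their mean is close to the expected module length L:

```python
import numpy as np

rng = np.random.default_rng(0)

L = 200          # expected module length supplied as input to MultiModule
t = 1.0 / L      # transition probability from a module state back to background

def module_length():
    # A module persists with probability 1 - t at each position, so its
    # length follows a geometric distribution with mean 1/t = L.
    length = 1
    while rng.random() > t:
        length += 1
    return length

lengths = np.array([module_length() for _ in range(20000)])
print(lengths.mean())   # approximately 200, i.e., the expected module length L
```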
Other refinements of the model, such as a more comprehensive phylogenetic tree topology and a more sophisticated background model, are expected to enhance the utility of this approach. However, with increasing model complexity, the computational efficiency and robustness of the statistical inference based on posterior sampling will become more challenging.

APPENDIX: THE GIBBS SAMPLER FOR THE C-HMM

Suppose the data set of interest contains sequences of n genes from N species, S_i^(m) (i = 1, 2, . . . , n; m = 1, 2, . . . , N). Without loss of generality, we assume n = 1, since the sampling procedure for the ith ortholog group is similar for all i. Thus, we simply denote the input sequences as S^(m) for the current species (m = 1, 2, . . . , N); they are available before the Gibbs sampling iteration and are thus effectively assumed to be given. Let W = [w_0, w_1, . . . , w_K] be the widths of the background model (w_0 = 1) and the motifs (i.e., Θ_k is a 4 × w_k matrix). The transition probability matrix T is defined in (1), and we denote q = [q_0, q_1, . . . , q_K]. The neutral evolution of background nucleotides is characterized by the parameter vector φ_b = [1 − µ_b, α, 2β] defined in (3). The probability of breaking an evolutionary bond between any base of an aligned TFBS and that of its ancestral site is µ_f. Denote the hidden states by Y, which indicate whether the observed nucleotides are located in a background (B) or module (M) region. For those in a module, the hidden states Y also specify whether they are within-module background (M_0) or motif sites (M_1 to M_K). Thus, the hidden states imply the locations of modules and motif sites. We further use A to denote the multiple alignment of S^(1), . . . , S^(N), Z to denote the ancestral sequence and V to denote the evolutionary bonds of aligned TFBSs. Conceptually, we treat A, Y, Z and V as missing data and denote them by D_mis = [A, Y, Z, V]. All the parameters are denoted by Ψ = [W, Θ, T, q, φ_b, µ_f].

A.1. Prior and posterior distributions. MultiModule takes the expected module length L and the number of motifs (TFs) K as input and fixes the transition probability from a module state to a background state at t = 1/L. We put independent Poisson priors with mean λ = 10 on the motif widths. Flat Dirichlet (Beta) priors are prescribed for all the other parameters. More specifically, we use a flat product Dirichlet of dimension 4 × w_k as the prior distribution for Θ_k (k = 1, 2, . . . , K) (i.e., the parameter for this product Dirichlet distribution is a 4 × w_k matrix with all elements = 1). We put four-, (K + 1)- and three-dimensional flat Dirichlet priors on θ_0^(0), q and φ_b, respectively. The prior distributions for r and µ_f are both Beta(1, 1). With these prior distributions specified, one can write down the joint posterior distribution of all the parameters and missing data [equation (5)], where π(Ψ) denotes the joint prior distribution. Of interest are the locations of motif sites and modules, that is, the hidden states Y. One wants to perform inference based on the marginal posterior distribution of Y given all the sequence data S [equation (6)], in which all the other unknown variables are marginalized out. However, the integral in (6) has no analytical solution, and thus we devise a Gibbs sampling approach to generate samples from the joint distribution [equation (5)]. Based on these samples, one can easily construct empirical marginal posterior distributions of the variables of interest (Y in this case).
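As a concrete illustration of this prior specification (a sketch only; the variable names below are ours and the snippet is not taken from any released MultiModule code), the priors in A.1 can be drawn with numpy as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

K, L_mod = 3, 200               # number of motifs and expected module length (inputs)
t = 1.0 / L_mod                 # fixed transition probability from module to background

# Independent Poisson(lambda = 10) priors on the motif widths w_1, ..., w_K
widths = rng.poisson(lam=10, size=K)

# Flat product Dirichlet priors on the motif weight matrices:
# each column of Theta_k is Dirichlet(1, 1, 1, 1)
pwms = [rng.dirichlet(np.ones(4), size=int(w)) for w in widths]   # each is w_k x 4

# Flat Dirichlet priors on theta_0^(0), q and phi_b
theta0 = rng.dirichlet(np.ones(4))        # ancestral background distribution (4-dim)
q = rng.dirichlet(np.ones(K + 1))         # within-module background + K motifs
phi_b = rng.dirichlet(np.ones(3))         # [1 - mu_b, alpha, 2*beta]

# Beta(1, 1) priors on r (background -> module) and mu_f (bond breaking for TFBSs)
r, mu_f = rng.beta(1, 1), rng.beta(1, 1)
```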
Our Gibbs sampler contains three steps: (1) sample A given Ψ; (2) sample [Y, Z, V] jointly given A and Ψ; (3) sample Ψ given [Y, Z, V] and A. To simplify the description, we first discuss how to sample from the conditional distributions assuming that the alignment A is given. Then we discuss how to update A by one more conditional sampling step via a Metropolis-Hastings algorithm. We also describe how to integrate out all the parameters Ψ in (5) in an approximate manner, so that we are effectively sampling from P(D_mis | S). We find that this implementation of a collapsed Gibbs sampler [Liu (1994)] improves the convergence of MultiModule.

A.2. Path representation of the c-HMM. Any multiple alignment of the orthologs S = [S^(1), . . . , S^(N)] can be viewed as a path in the N-dimensional space from (0, 0, . . . , 0) to (L_1, L_2, . . . , L_N), with L_m the length of S^(m) (m = 1, 2, . . . , N). The path is composed of a series of N-dimensional points, and the coordinates of each point are the last visited positions of these N sequences. Figure 6 shows the map from a multiple sequence alignment to a path in 3-D space, which visits all the sequence positions in a unique order. This is a natural generalization of the 2-D path representation of a pair-wise alignment. Suppose the current alignment path is A = a_1 a_2 · · ·, where the m-th coordinate of each point a_d indexes a position in S^(m) (m = 1, 2, . . . , N). The definition of the c-HMM implies three constraints on the hidden states along this path. Constraint (i) says that any aligned nucleotides share the same hidden state (the coupled state). Constraint (ii) is due to the fact that if m ∉ EC(d), then the m-th coordinates of a_{d−1} and a_d are equal, which implies that they actually refer to the same nucleotide in S^(m) and thus have the same hidden state. For example, in Figure 6, the third coordinates of the points in sub-path E are identical, and they all refer to position 170 of S^(3). Constraint (iii) is consistent with the definition that a motif binding site is treated as one state (M_k) in our model. We define change points of the path as the points where the alignment path changes its direction in the N-dimensional space. For instance, all the points shown in Figure 6(B) are change points except the starting and ending points. By this path representation of an alignment, we effectively augment the hidden state to an N-dimensional vector such that the model reduces to a Markov chain in the augmented state space. Then one may use the Markovian property to derive recursive algorithms to sample the hidden states exactly, given parameter values and an alignment. Denote by X[i, j] = [X(i + 1), . . . , X(j)] the groups of nucleotides emitted between any two points a_i and a_j (j ≥ i) in the path; in particular, equation (7) expresses the forward summation in terms of Tr(y^H | y, r, t), the transition probability from y to y^H, and of P(X(d) | θ_0, φ_b) and P(X[d − w_k + 1, d] | Θ_k, µ_f, φ_b), the marginal probabilities of emitting X(d) and X[d − w_k + 1, d] given that the current state is B and M_k (k = 0, 1, . . . , K), respectively. Recall that M_k is the decomposed state of M, which indicates whether the current hidden state is within-module background (M_0) or one of the K motifs (M_1 to M_K). Since a TFBS is treated as one state as a whole, no change points are allowed in the interval [d − w_k + 1, d] for k = 1, . . . , K in the summation of (9). In other words, motif sites are not allowed to be located across any change points of the alignment. We calculate the transition probabilities in (8) and (9) in the same manner. If Y(d) = [M_k, M_k, M] (k = 0, 1, . . . , K), we consider the point (d − w_k) in a similar way, given that (d − w_k) ∈ sub-path E.
If |EC(d)| ≥ 2, the current point a_d emits a group of aligned nucleotides. In order to calculate the marginal emission probabilities, one needs to sum over all possible ancestral nucleotides. More specifically, for background nucleotides the emission probability is given by (11), where θ_0^(0)(z) is the probability of observing nucleotide z under the ancestral background model, and Φ_b is the neutral substitution matrix defined in (3). For k = 1, 2, . . . , K, the corresponding motif emission probabilities are given by (12) and (13), where Θ_ki is the weight vector at the ith position of motif k, that is, the ith column of Θ_k; equation (12) reduces to equation (11) for within-module background. If |EC(d)| = 1, the calculation reduces to the single-species situation, as in CisModule [Zhou and Wong (2004)]. Using the recursions of equations (8) and (9), one obtains the marginal probability P(S | A, Ψ) in (14), in which all the hidden states Y, ancestral nucleotides Z and evolutionary bonds V are summed over. Based on the partial summations calculated in equations (8) and (9), we then sample the hidden states backward along the path: the hidden state Y(d) of the emission components of a_d is sampled with probability proportional to f_d(Y(d)) Tr(Y(d + 1) | Y(d), r, t), subject to constraints (i) and (ii) of the c-HMM. If the emission components of a_d are sampled as background, we move to (d − 1) and repeat to sample Y(d − 1). If the emission components of a_d are sampled as state M, we further sample the motif type (i.e., the decomposed states M_0, M_1, . . . , M_K) with probabilities proportional to the K + 1 terms of f_d(Y(d)) in (9) for k ∈ {0, 1, . . . , K}. Given the imputed value of Y(d), we set Y(d − w_k + i) = Y(d) for i = 1, . . . , w_k − 1, following constraints (iii) and (ii). Then we move to the point (d − w_k) and repeat the sampling procedure for Y(d − w_k). If |EC(d)| ≥ 2, we also sample the associated ancestral nucleotides and evolutionary bonds according to the calculations in equations (11) to (13). Suppose the current emission components are background nucleotides (B or M_0). We sample the ancestral nucleotide Z_d with probability given by the corresponding term in (11) for z ∈ {A, C, G, T}. If the current emission components are binding sites of motif k, we sample each base of the ancestral binding site Z_{d−w_k+1} · · · Z_d independently with the probabilities in (13). Given the ancestral binding site, we update the evolutionary bond between the ancestral and current binding sites for each base independently: the bond between a current aligned base x^(m) and its ancestral base is connected with its conditional posterior probability; otherwise the bond is broken.

A.4. Sample Ψ given [Y, Z, V] and A. Let us return to the graphical representation of the c-HMM as illustrated in Figure 1(B). In this conditional sampling step, the hidden states, the ancestral nucleotides of coupled nodes and the evolutionary bonds associated with aligned TFBSs are given. We sample r from Beta(C_BM + 1, C_BB + 1), where C_BM and C_BB are the numbers of transitions from B to M and from B to B, respectively. Denote the numbers of states B, M and M_k by |B|, |M| and |M_k| for k = 0, 1, . . . , K, respectively (|M| = Σ_{k=0}^{K} |M_k|). The conditional posterior distribution of q = [q_0, q_1, . . . , q_K] is Dir(|M_0| + 1, . . . , |M_K| + 1). We update the ancestral background distribution θ_0^(0) by sampling from Dir(C_B + 1), where C_B is the count vector of the imputed ancestral background nucleotides and 1 is a vector of 1's of length 4. Denote by C_i, C_s and C_v the numbers of identities, transitions and transversions from the ancestral to the current aligned background nucleotides, respectively. Then we sample φ_b = [1 − µ_b, α, 2β] from Dir(C_i + 1, C_s + 1, C_v + 1).
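These conjugate updates can be written compactly. The sketch below is illustrative only; the dictionary keys are hypothetical names for the sufficient statistics defined in the text, and the function is not part of any released MultiModule code. It performs one conditional draw of r, q, θ_0^(0) and φ_b with numpy:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_parameters(stats):
    """One conditional draw of (r, q, theta0, phi_b) given the current hidden
    states, ancestral nucleotides and alignment, following Section A.4.

    `stats` holds the sufficient statistics named in the text:
      "C_BM", "C_BB"     - transition counts B->M and B->B
      "M_sizes"          - list [|M_0|, ..., |M_K|] of module-state counts
      "C_B"              - length-4 count vector of imputed ancestral background bases
      "C_i","C_s","C_v"  - identities, transitions, transversions in aligned background
    """
    # r | ... ~ Beta(C_BM + 1, C_BB + 1)
    r = rng.beta(stats["C_BM"] + 1, stats["C_BB"] + 1)

    # q | ... ~ Dirichlet(|M_0| + 1, ..., |M_K| + 1)
    q = rng.dirichlet(np.asarray(stats["M_sizes"]) + 1)

    # theta_0^(0) | ... ~ Dirichlet(C_B + 1)
    theta0 = rng.dirichlet(np.asarray(stats["C_B"]) + 1)

    # phi_b = [1 - mu_b, alpha, 2*beta] | ... ~ Dirichlet(C_i + 1, C_s + 1, C_v + 1)
    phi_b = rng.dirichlet(np.array([stats["C_i"], stats["C_s"], stats["C_v"]]) + 1)

    return r, q, theta0, phi_b
```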
For all the aligned TFBSs with their imputed ancestors, we count the numbers of broken and connected evolutionary bonds, |V_0| and |V_1|, respectively, and sample µ_f from Beta(|V_0| + 1, |V_1| + 1). The sufficient statistic for Θ_ki (k = 1, . . . , K; i = 1, . . . , w_k) has three components: (1) the count vector of unaligned current sites, C_ki^g, (2) the count vector of ancestral sites, C_ki^z, and (3) the count vector of aligned descendant sites with a broken evolutionary bond, C_ki^b, since each of them is an independent sample from Θ_ki under our model. Then the conditional posterior distribution of Θ_ki is Dir(C_ki + 1), where C_ki = C_ki^g + C_ki^z + C_ki^b. A Metropolis-Hastings step is implemented to update the motif widths. We illustrate our method with one motif as an example. Given the current width w and all sampled sites of this motif, we propose to increase or decrease one base at their left or right ends with equal probability. After choosing one of the four possibilities, the problem is equivalent to a model selection problem: the nucleotides observed in the selected positions are generated either from the background (H_0) or from a motif column (H_1). If H_1 is true, denote the weight vector of the motif column by Θ_w and its sufficient statistic by C_w, calculated as described in the previous paragraph for any Θ_ki. Before calculating C_w, one needs to sample the associated evolutionary bonds V_w, with |V_0^w| and |V_1^w| denoting the numbers of broken and connected bonds. Under H_0, we denote by C_b the corresponding count vector of the nucleotides in the selected positions. The acceptance ratio compares the marginal likelihoods under H_1 and H_0, weighted by the prior ratio π(H_1)/π(H_0), where we define R^C = Π_{i,j} R_ij^{C_ij} and Γ(R) = Π_{i,j} Γ(R_ij) with R and C being matrices (vectors) of the same size, |C_w| is the total count in C_w, and the ratio of π(H_1) over π(H_0) is determined by the Poisson prior on the motif width. In each iteration, we always propose to flip the two hypotheses. The proposal from H_0 to H_1 involves proposing evolutionary bonds for all the aligned nucleotides that are identical to their ancestors. We propose to break each of these bonds independently with probability µ_f. Under these proposals, the proposal ratio involves n_s, the number of aligned nucleotides that are different from their ancestors, and Q stands for the proposal probabilities. From the definition of an evolutionary bond, we always have n_s ≤ |V_0^w|.

A.5. Sample A given Ψ. To account for the uncertainty in multiple alignments, we update A by sampling from its marginal posterior distribution given the current parameters, that is, from P(A | Ψ, S). A Metropolis-Hastings step is implemented for this conditional sampling step. Suppose the current alignment is A. We propose a new alignment A* from an ordinary multiple sequence alignment procedure based on an HMM [Baldi et al. (1994), Krogh et al. (1994)], denoted by MA-HMM to distinguish it from the c-HMM. In the MA-HMM, each sequence is aligned to a profile (or a hidden sequence template) based on an HMM with three states: aligned, insertion and deletion. In our proposal, the transition matrix between the three states is fixed by prior expectations, with the states ordered as deletion, insertion and aligned. The rationale for selecting these values is that we expect to start an aligned block every 500 bps (D_11 = D_22 = 0.998) and that the average length of an aligned block is 20 bps (D_33 = 0.95). These values serve as the default parameters for all the results presented in this article.
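A small numerical illustration of these defaults follows (a sketch; only the diagonal entries are given in the text, so the equal split of the remaining probability mass between the two other states is an assumption made purely for illustration):

```python
import numpy as np

# States ordered as in the text: deletion, insertion, aligned.
diag = np.array([0.998, 0.998, 0.95])   # D_11, D_22, D_33 of the default MA-HMM proposal

# The off-diagonal split is NOT specified in the text; assume an equal split here.
T_ma = np.zeros((3, 3))
for i in range(3):
    T_ma[i, i] = diag[i]
    others = [j for j in range(3) if j != i]
    T_ma[i, others] = (1.0 - diag[i]) / 2.0

# A state with self-transition probability p persists for 1 / (1 - p) steps on average:
print(1.0 / (1.0 - 0.95))    # 20.0  -> average aligned block of about 20 bp
print(1.0 / (1.0 - 0.998))   # 500.0 -> an aligned block starts roughly every 500 bp
```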
We use the current neutral substitution matrix as the emission probabilities from the profile (the ancestral sequence Z) to a current aligned nucleotide. Ancestral and unaligned nucleotides are emitted from their respective background models. The proposed alignment A* is then accepted with the Metropolis-Hastings probability in equation (17), in which P(S | A, Ψ) and P(S | A*, Ψ) are calculated by (14) through the recursive forward summations, and Q(S, Z | A, θ_0, Φ_b) and Q(S, Z | A*, θ_0, Φ_b) are the probabilities of observing S and Z given alignments A and A* under the MA-HMM, respectively; here c_b^(m) and C_t are defined similarly to those in (15) but for all the sequence positions. Note that the prior probabilities of an alignment are identical in both the c-HMM and the MA-HMM, and thus cancel on the right-hand side of equation (17).

A.6. A collapsed sampler. Suppose the data set contains n genes. Denote the missing data, including Y, Z, A and V, for the ith gene by D_mis,i, and let S_i = {S_i^(m)}, m = 1, . . . , N, be the orthologous sequences of the ith gene, i = 1, 2, . . . , n. Since collapsing random components in the Gibbs sampler usually results in more efficient sampling schemes [Liu, Wong and Kong (1994), Liu (1994)], we implement a collapsed MultiModule to sample from P(D_mis,1, . . . , D_mis,n | S) by iteratively scanning each gene. For the ith gene, given the current imputed values of the missing data for all the other genes, we form the parameter estimate Ψ̂_[−i] as in equation (19) and then update D_mis,i as in equation (20). Equation (19) can be easily calculated from the conditional posterior distributions of the different parameters (Section A.4), and equation (20) is exactly the same sampling procedure as described in Sections A.3 and A.5, taking Ψ̂_[−i] as the current parameters. We want to emphasize that, although this is only an approximate way to collapse all the parameters, it indeed improves the convergence of the Gibbs sampler. The simulation studies were performed with this collapsed version of MultiModule.
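The overall collapsed sweep can be summarized structurally as in the sketch below (an outline only; the two callables are hypothetical placeholders for the computations in equations (19) and (20) and are not part of any published MultiModule implementation):

```python
def collapsed_sweep(sequences, missing, estimate_params_excluding, sample_missing_given,
                    n_iter=1000):
    """Iteratively rescan each gene, as described in Section A.6.

    sequences[i] holds the ortholog group S_i; missing[i] holds D_mis,i = [A, Y, Z, V].
    estimate_params_excluding(sequences, missing, exclude=i) returns the plug-in
    parameter estimate for all genes except i (equation (19)); sample_missing_given(S_i, psi)
    resamples D_mis,i given those parameters (equation (20), i.e., Sections A.3 and A.5).
    """
    n = len(sequences)
    for _ in range(n_iter):
        for i in range(n):
            psi_hat = estimate_params_excluding(sequences, missing, exclude=i)
            missing[i] = sample_missing_given(sequences[i], psi_hat)
    return missing
```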
Mechanical dependency of the SARS-CoV-2 virus and the renin-angiotensin-aldosterone (RAAS) axis: a possible new threat

Pathogens in our environment can act as agents capable of inflicting severe human diseases. Among them, the SARS-CoV-2 virus has recently plagued the globe and paralyzed the functioning of ordinary human life. The virus enters the cell through the angiotensin-converting enzyme-2 (ACE-2) receptor, an integral part of the renin-angiotensin system (RAAS). Reports on hypertension and its relation to the modulation of the RAAS are generating interest in the scientific community. This short review focuses on the direct and indirect effects of SARS-CoV-2 infection on our body through modulation of the RAAS axis. A patient with severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) infection, which causes COVID-19, may have hypertension as a pre-existing disease or develop it in a post-COVID scenario. Several studies on how SARS-CoV-2 modulates the RAAS axis indicate that it alters our body's physiological balance. This review seeks to establish a hypothesis on the mechanical dependency between SARS-CoV-2 and RAAS modulation in the human body. This study intends to impart ideas on drug development and design that target the modulation of the RAAS axis to inactivate the pathogenicity of the SARS-CoV-2 virus. A systematic hypothesis of this kind could help attenuate the pathogenicity of similarly dreadful viruses in the future.

Introduction

The outbreak of COVID-19 due to severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) has affected millions of people globally, of whom about 3.5 million have died (Dyer 2021; Woolf et al. 2021). However, recent data have suggested that the actual global mortality due to COVID infection is almost 6.9 million (Dyer 2021). Pathogens like SARS-CoV-2 lead to fatal outcomes, especially in immunocompromised patients infected with the virus (Belsky et al. 2021). Differences in mortality rates among different age groups of the population can be linked with the differential status of the endocrine system, behavioral system, and physiological conditions related to the age groups of the infected human population (Undurraga et al. 2021). The pathogenicity of the coronavirus is attributed to the high affinity of the spike protein for the angiotensin-converting enzyme (ACE)-2 receptors, particularly in the lungs. It is known that ACE2 receptors act as an integral part of an axis termed the renin-angiotensin-aldosterone system (RAAS) (Lavoie and Sigmund 2003; Tikellis and Thomas 2012). There is, therefore, a possibility of modulation of such an essential endocrine axis after COVID infection. Several studies have been made on the impact of COVID on the RAAS axis. However, the current data remain scattered. In this review, we attempt to summarize the effects of SARS-CoV-2 on the RAAS axis. We also link the impact of SARS-CoV-2 on RAAS-mediated crosstalk signaling processes such as hypertension and behavioral alteration.

Effect of SARS-CoV-2 on angiotensin

The RAAS system has a crucial role in controlling the retention of salts and water in the circulatory system via renal reabsorption, thus regulating blood pressure (Vaduganathan et al. 2020). Angiotensin, specifically Ang II, is the main contributor to the RAAS system, stimulating the secretion of aldosterone, the main end product of the glomerulosa layer of the adrenal cortex.
Membrane-bound aminopeptidase ACE2 is a regulatory enzyme that cleaves the active forms of angiotensin I and angiotensin II into angiotensin 1-9 and angiotensin 1-7, respectively (Kuba et al. 2010; Strawn et al. 1999; Ye et al. 2006). ACE2 maintains the physiological equilibrium between the vasoconstrictor effect of angiotensin II and the opposing effect of angiotensin 1-7, which lowers blood pressure by dilating the blood vessels (Fig. 1) (Burrell et al. 2004). However, SARS-CoV-2 infection disrupts the homeostatic environment maintained by the RAAS. Measuring blood levels of ACE2, after prospective validation, is now known to provide a risk-stratification opportunity, leading to the identification of individuals who are at greater risk of infection or are susceptible to experiencing severe medical complications. An opportunistic approach to protect against the SARS-CoV-2 disease may be possible by targeting the ACE2 system. For an individual patient, this may aid in monitoring responses to preventive measures and treatment interventions. Thus, focusing on the potential therapeutic strategy enabled by targeting ACE2 is especially important. Autopsies of deceased COVID-19 patients show a remarkable decrease in ACE2 expression (Oudit et al. 2009; Chaudhry et al. 2020) along with severe lung injuries (He et al. 2006), attributed to ACE2 inhibition by the virus. Further investigations show a correlation between elevated plasma angiotensin II levels and both lung injury and viral load in severely infected patients (Miesbach 2020), indicating that ACE2 downregulation promotes angiotensin II accumulation. The binding of angiotensin II to AT1 receptors can lead to enhanced inflammation, vasoconstriction, and thrombosis. On the other hand, angiotensin III, also called Ang-(2-8), is formed when aminopeptidase A (APA) cleaves the Asp1-Arg2 bond in Ang II and converts it into Ang III. The major function of Ang III is to regulate blood pressure and vasopressin release (Reaux et al. 2001). AT2R is another important factor in RAAS regulation, as both angiotensin II and angiotensin III are modulated after SARS-CoV-2 infection. Though AT2R has a significant role in the vasodilation of the vascular epithelium, it has little impact on RAAS regulation, while Na+ excretion through the kidneys increases (Sumners et al. 2015). Angiotensin III might positively affect Na+ reabsorption indirectly by interacting with AT2R in the zona glomerulosa of the adrenals, but a concomitant vasodepressor effect is seen when synthetic beta-Pro(7) angiotensin III is introduced (Del Borgo et al. 2015). It was earlier reported that virus-induced ACE2 downregulation creates a local imbalance between the RAS and the ACE2/angiotensin-(1-7)/Mas axis. This might directly lead to severe organ injury. Therefore, the balance between angiotensin I and II, and between angiotensin 1-7 and 1-9, rather than each one alone, may be the main determinant of dysfunctions related to the RAAS (Ni et al. 2020). As a whole, SARS-CoV-2 infection-induced modulation of angiotensin and its related receptor expression leads to several altered physiological conditions in the human body.

Possible partial effect of SARS-CoV-2 on aldosterone

SARS-CoV-2 infection upregulates angiotensin II after binding with the ACE2 receptor (Oudit et al. 2009; Chaudhry et al. 2020), as discussed earlier. In turn, angiotensin II upregulates aldosterone secretion (Patel et al. 2017). Therefore, after SARS-CoV-2 infection, there might be a possibility of elevated aldosterone levels.
Reports supporting this are few, but some recent workers have found elevated aldosterone levels in COVID-19 patients (Villard et al. 2020).

(Abbreviations used in the figure: ACE2, angiotensin-converting enzyme 2; AT1R, angiotensin type 1 receptor; AT2R, angiotensin type 2 receptor; AKKY C1, ankyrin C1 protein; CK1, casein kinase 1; ENaC, epithelial sodium channel receptor.)

Mineralocorticoids activate the epithelial sodium channel (ENaC), also called the amiloride-sensitive sodium channel, a heterotrimeric ion channel selectively permeable to sodium ions and located on the apical surface of principal cells (Noreng et al. 2018). Aldosterone facilitates ENaC function through mediator protein families of casein kinase (CK) and ankyrin G (Klemens et al. 2017), and even through the circadian rhythm-controlling period 1 protein (Gumz et al. 2009). Casein kinase 1 delta/epsilon, a subtype of CK protein, triggers ENaC-alpha expression (Yan et al. 2007), possibly at the transcriptional level, although the mechanism is still unclear. In several studies, CK1 delta/epsilon blockade suppressed ENaC mRNA expression by restricting PER-1 nuclear entry (Richards et al. 2012). Therefore, a SARS-CoV-2-mediated disruption of the RAAS system not only affects the respiratory and cardiovascular systems but might also perturb the regulation of the circadian cycle. This possible behavioral alteration due to elevated aldosterone should be taken into consideration for COVID-19 patients.

Possible partial effect of SARS-CoV-2 on NO activity and hypertension

Nitric oxide (NO) is a potent vasodilator; it inhibits the vasoconstrictive effects of angiotensin II on blood vessels (Richards et al. 2012). NO not only reduces blood pressure by inhibiting AT1R in the vascular epithelium (Savoia et al. 2020) but may also drive angiotensin II toward AT2R and elevate the angiotensin II-AT2R interaction, which further lowers blood pressure. If administered at the early stages of the infection, NO might play a role in restricting virus binding to AT1R in the lungs by hindering AT1R endocytosis of SARS-CoV-2 (Fig. 2).

Modulation of RAAS in other pathological conditions linked with SARS-CoV-2 infection

RAAS is modulated not only in COVID-19 patients but also in several pathological conditions like obesity, diabetes, inflammation, renal disorders, and others, which might accentuate the effects of SARS-CoV-2 infection and contribute to comorbidities and asymptomatic disease in the human population. Obesity is another critical factor that disrupts the RAAS axis and elevates renin (Kalil and Haynes 2012) and aldosterone secretion from the adrenal gland (Peminda et al. 2017; Yang et al. 2021). An adipokine named leptin upregulates renin via sympathetic activation through the CNS (brain stem and hypothalamus) (Peminda et al. 2017; Hall et al. 2010). Along with renin, angiotensin II activity is enhanced either by inhibiting ACE2 (Patel et al. 2016; Soler et al. 2013; Kawabe et al. 2019) or by facilitating the conversion of angiotensin I to angiotensin II by cathepsins in adipose tissue (Schütten et al. 2017). This deregulated renin/angiotensin/aldosterone axis that pre-exists in obese individuals may aggravate COVID-19.
Meta-analytic data show that susceptibility to COVID-19 infection is several-fold higher in obese individuals than in non-obese individuals, and that obesity also aggravates the severity of the condition (Yang et al. 2021; Hall et al. 2015; Zhang et al. 2021) and is a strong contributor to comorbidities. On the other hand, obesity is strongly associated with diabetes, in which high glucose levels induce the release of renin, as reported by Toma et al. (2008). Hyperactivation of the RAAS axis can therefore be observed in diabetic patients (Ribeiro-Oliveira Jr et al. 2008). Hyperactivation of the RAAS axis could consequently help in the invasion of SARS-CoV-2. This observation is supported by some recent data, which show that diabetic patients are more prone to SARS-CoV-2 infection and mortality (Feldman et al. 2020). Besides this, in diabetic patients, monocyte and macrophage activation occurs, which elevates proinflammatory cytokines and chemokines like TNF-α, IL-6, IL-8, and others (Kurihara et al. 2012). The heightened proinflammatory cytokines and chemokines create inflammation in the body that might play a vital role in SARS-CoV-2 infection, especially in the generation of asymptomatic disease (Xie et al. 2021). Renal abnormalities reported in association with COVID-19 include proteinuria, hematuria, and acute kidney injury. SARS-CoV-2 can infect podocytes and tubular epithelial cells, contributing to the aforementioned renal abnormalities. The renal abnormalities associated with COVID-19 arise from a complex, multifactorial pathophysiology involving the following: (i) a local disruption in RAAS homeostasis, (ii) a direct cytopathic effect of the virus, and (iii) a systemic inflammatory response to infection (Martinez-Rojas et al. 2020). ACE2 supports renal integrity and function through the enzymatic production of angiotensin 1-7 (Ang1-7). ACE2, widely expressed in proximal epithelial cells, smooth muscle cells, vascular endothelial cells, and podocytes, acts as an anti-inflammatory, antifibrotic, vasodilatory, and diuretic/natriuretic agent via activation of the Mas receptor axis. When these ACE2 activities in the kidneys are disrupted, renal damage can follow, and a high incidence of acute kidney injury (AKI) among SARS-CoV-2 patients has been reported (Armaly et al. 2021). The exogenous administration of Ang (1-7) is considered an appealing therapeutic option, given the benefits of ACE2/Ang1-7 (attenuation of inflammation, apoptosis, oxidative stress, coagulation, and cell proliferation, together with vasodilation, diuresis, and natriuresis), as well as the high incidence of AKI in these ACE2-depleted disorders (Martinez-Rojas et al. 2020; Armaly et al. 2021).

Drugs that are capable of addressing the alteration in the RAAS axis

Many drugs have been administered to block the RAAS axis. Some of these drugs are aldosterone receptor antagonists, angiotensin-converting enzyme inhibitors (ACEi), sodium channel blockers, and potassium-sparing channel blockers, among others (Table 1). However, due to the COVID outbreak, the main drug target area has also been modulated, as the pathogen thrives in the human body and exerts its detrimental effect by altering the RAAS axis.
Possible drug target area

Prompted by the fact that elderly patients with cardiovascular comorbidities have been gravely affected by the severe forms of SARS-CoV-2, interpretations based on retrospective observational studies about the influence of chronic treatment with RAAS-blocking drugs are ongoing. However, these retrospective interpretations should be made with caution, and only evidence-based data on the impact of RAAS-interfering medications in patients and the general population should be published. Under in vivo conditions, reports on 15 classes of drugs that increase ACE2 levels, together with a reanalysis of clinical data from a meta-analysis of 9 studies, have shown that the usage of ACEIs/ARBs is not connected with an increased risk of mortality (Akhtar et al. 2020; Kai and Kai 2020; Yehualashet and Belachew 2020). Though some animal studies imply that anti-hypertension drugs enhance ACE2 expression or activity, human data contradict the notion of increased expression of the transmembrane ACE2 protein in the lung brought about by anti-hypertension medications. Cardiovascular conditions can influence the expression of ACE2 in humans independently of RAAS-blockade treatment strategies (Kai and Kai 2020; Gheblawi et al. 2020; Yehualashet and Belachew 2020). A change in the use of angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs) is not required to address the management of elevated blood pressure in the treatment of COVID-19 infection (Gressens et al. 2021). Though it is controversial whether regular usage of these drugs affects ACE2 expression or not, studies showed that chronic hypertensive patients tend to have severe symptoms, and hypertension remains one of the main factors for comorbidity (Clark et al. 2021). There is still no evidence that daily dosing with ARBs and ACEIs elevates plasma ACE2 levels before infection (Ni et al. 2020). Studies show varied responses of patients to ACE2-modulating drugs. Thus, the therapeutic application of such drugs has also varied; most studies recommend persistence of medication. Alternative medications have been devised for cases where ACEIs and ARBs show adverse effects (Jarari et al. 2016). Detailed studies on the impact of antihypertensive drugs on ACE2 still need to be done, to shed light on the mechanisms and critical details affecting hypertension. Since more severe, highly mutated variants of the COVID-19 virus are being identified, alternative approaches exploring the modulation of the specific downstream pathophysiologic effects of the virus that lead to morbidity and mortality are being researched (Gressens et al. 2021). Opportunities to understand the various aspects of RAAS inhibitors in alleviating indirect virus-induced lung and other organ injuries are getting attention (Ingraham et al. 2020). Patients with cardiovascular comorbidities are often administered RAAS blockers. The degree of ACE2 expression in different age groups has been hinted to be directly related to the incidence and severity of COVID-19 and to mortality from severe infection. The benefits or risks of pharmacologic modification of the RAAS-SCoV axis by angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs) are still not very clear (Gressens et al. 2021).
The possibility that these drugs may facilitate viral cell entry has fueled controversies, since the expression of ACE2 may be increased by RAAS blockers, yet by degrading angiotensin II into angiotensin-(1-7), ACE2 functions as a counter-regulator of the RAAS (Gressens et al. 2021). While the former has led to concerns that such modulation may aggravate and worsen the condition of the patients, the latter may mediate beneficial effects in COVID-19. Contemporary experimental models using relevant preclinical approaches favor a protective effect of RAAS-CoV axis inhibition on both lung injury and survival. However, clinical data on the role of RAAS modulation in the setting of SARS-CoV-2 remain limited. Clinical equipoise regarding the efficacy of RAAS-based interventions has been noted, and a multisite randomized controlled clinical trial to evaluate the effect of inhibiting the RAAS-SARS-CoV-2 axis on acute lung injury in COVID-19 has been proposed (Ingraham et al. 2020). Based on viral microbiology, targets for the various proposed interventions against SARS-CoV-2 and for the inhibition of viral cellular injury have been outlined (Ingraham et al. 2020). There is evidence that differences in the incidence and severity of COVID-19 infection may be related to ACE2 (Kaseb et al. 2021). The prevalence and severity of COVID-19 among vulnerable groups of patients having age-, gender-, or ethnicity-related comorbidities with established high levels of ACE2 expression strongly support this inference and establish ACE2 as a candidate therapeutic target. Given the burden of COVID-19 infection in these vulnerable groups and the potential therapeutic and preventive impact of adopting ACE2-driven anti-viral strategies, a faster global approach to controlling the severe infection and mortality of the COVID-19 pandemic may be possible (Kaseb et al. 2021). Though in-depth clinical and mechanistic investigations are still ongoing, the literature indicates that the usage of ACEIs/ARBs is safe, although in severe COVID-19 patients there may be an increased risk of renal injury (Akhtar et al. 2020). Internalization of ACE2 by SARS-CoV-2 upon entry into the target cell likely reduces cell-surface ACE2 levels, thus translating into (i) downregulation of Ang-(1-7), (ii) unopposed Ang II accumulation, and (iii) promotion of RAAS activation (Kaseb et al. 2021). ACE inhibitors and Ang II-receptor blockers, as RAAS inhibitors, serve as potential therapeutic strategies to prevent SARS-CoV-2 infection. Other options include modifying ACE2 levels or activity in the target cells. As such, these options can be pursued by blocking spike-protein priming with TMPRSS2 inhibitors, slowing viral entry into target cells by using soluble recombinant ACE2 to competitively bind the COVID-19 virus and thereby serve as a virus trap and inactivator, and developing a vaccine targeting the spike protein of SARS-CoV-2 (Kaseb et al. 2021; Ferrario et al. 2005; Akhtar et al. 2020). ACE2 is a membrane-bound aminopeptidase.
It is a carboxymonopeptidase that preferentially hydrolyzes between proline and carboxy-terminal hydrophobic residues and is found both as a membrane-associated and as a secreted enzyme in cardiovascular, neuronal, and reproductive organs (Ferrario et al. 2005). Angiotensin I and angiotensin II are cleaved into the angiotensin-(1-9) and angiotensin-(1-7) peptides by ACE2 (Mourad and Levy 2020). ACE2 is overexpressed in heart failure, arterial hypertension, and diabetes mellitus. The existence of a cardiovascular-protective ACE2-angiotensin-(1-7)-Mas receptor axis is supported by several studies (Ferrario et al. 2005). Activation of ACE2 is known to modulate the host and support viral replication. Investigations into the role of ACE2 in activating immune signals upon SARS-CoV-2 attachment have prompted the construction of a host regulatory network around viral attachment to the ACE2 receptor, specifically in the lungs (Lite et al. 2021). The gene-expression profile of the human lung was integrated with the host regulatory network to investigate the altered host signaling mechanisms prevalent in SARS-CoV-2 viral infection. The immune modulation in the constructed network, comprising 133 host proteins with 298 interactions that directly or indirectly connect to the ACE2 receptor, was also determined by functionally enriching the network. Results show that upon infection by SARS-CoV-2, the host lungs differentially regulated 29 of the 133 host proteins. A new network generated from the altered proteins, by connecting them with multiple interacting proteins, was observed to modulate kinase, cytokine, and carboxypeptidase activity. This modulation leads to changes in the host immune system, signal transduction mechanisms, and cell cycle. Secondary health complications were apparent from an investigation indicating similar signaling events in the kidneys, pancreas, small intestine, testes, placenta, and adrenal glands (Lite et al. 2021). The interconnected protein hubs are assumed to be activated when the SARS-CoV-2 virus binds to the ACE2 receptor. The direct mediators of these protein hubs, as revealed by the interactome data, were AGT (angiotensinogen), LAMA1 (laminin subunit alpha 1), NTS (neurotensin), and GHRL (ghrelin and obestatin prepropeptide). The association of the regulatory network with various biological pathways responsible for disease, the immune system, DNA repair, cell cycle, autophagy, programmed cell death, transcription, and signal transduction was revealed by the Reactome database (Lite et al. 2021). The presence of ACE2-inducible immune factors across different tissue types was found to be most pronounced in the lungs compared to other tissues, making them more sensitive. The overactive immune response leading to respiratory illness could be an underlying molecular factor (Lite et al. 2021). ACE2 modulator drugs such as angiotensin-converting enzyme inhibitors (ACEi), though having a suppressing effect on hypertension and being reported to decrease mortality (Chu et al. 2021), are less likely to be used in patients with a prior chronic medical condition (like hypertension, diabetes mellitus, or cardiovascular disease) and may show different effects in such patients (Fang et al. 2020). In contrast to Na+ absorption, K+ secretion occurs through the cells of the epithelial lining of the renal tubule, but in COVID-19, uncontrolled K+ secretion may induce a hypokalemic condition in infected individuals (Moreno-P et al. 2020),
leading to a need for ventilation support in severely infected patients. However, a high K+ concentration in the blood has an inhibitory role in the RAAS system (Poulsen and Fenton 2019); thus, K+-sparing drugs such as spironolactone are prescribed in a few cases as an alternative way of combating SARS-CoV-2 that bypasses the adverse effects of ARBs and ACEIs (Cadegiani et al. 2020). In another article (Gumashta and Gumashta 2020), it is suggested that AT2R agonists might be a way to reduce disease severity in patients. Thus, Ang III/AT2R might be a new axis to be considered in drug design. The aforementioned K+-sparing diuretics not only increase K+ reabsorption (Cadegiani et al. 2020) but may also increase Na+ excretion by blocking ENaCs (Fang et al. 2020). Amiloride, a potent potassium-sparing diuretic, has multifaceted potential, as it not only suppresses ENaC activity (Bhagatwala et al. 2014) but also directly reduces blood pressure, cardiovascular risks, and the edema often observed during COVID infection (Hinrichs et al. 2018). Since aldosterone blockers might hamper the activity of ACE2 (Fang et al. 2020), ENaC-blocking drugs might be another way to bypass adverse coronary effects. Research comparing SARS-CoV-2 survivors with and without chronic hypertension is yet to be done, though ACE2 modulation may affect them in the future. Despite this lack of knowledge about the risk of recurrence of COVID-19, long-term effects of medication and weakened immunity may trigger the infection again. The possibility of reinfection with SARS-CoV-2 is still poorly understood, although some recent work has found two distinct, significant genetic variants of SARS-CoV-2 in the same individual, reported as a case of reinfection (Tillett et al. 2020). Apart from vaccines, alternative medicines are available, and new medications with few side effects and organ-protective properties may offer a possible way out of this extraordinary situation.

Conclusion

This review highlights how SARS-CoV-2 infection, acting through the RAAS axis, can modulate that axis. Furthermore, this complex loop of interdependence between the RAAS and SARS-CoV-2 becomes even more complicated in several pathophysiological conditions, such as diabetes; a RAAS axis modulated by immunosuppressive conditions may accelerate the infectivity of SARS-CoV-2 because these diseases are known to raise comorbidity. Overall, the present review describes how COVID patients bear alterations in their RAAS axis and, therefore, undergo physiological and behavioral changes. Chronic hypertensive individuals can be more susceptible to recurrence of the disease than normotensive individuals (Clark et al. 2021). Reinfection might be another problem that may arise in the future with a different and possibly more lethal strain of the SARS-CoV-2 virus. The development of new target-specific drugs against hypertension, i.e., drugs less dependent on ACE2, is vital. In addition, modifying ACE2 levels or activity in the target cells is highly desirable. Though this review does not present reports of hypertension in COVID-19 survivors, active involvement of the RAAS might possibly trigger hypertension in high-risk groups of the population.
Infected patients may experience alterations in blood pressure and hypertensive state through SARS-CoV-2-mediated RAAS alteration; therefore, it is imperative to develop a combined, synergistic approach that addresses both the infection and hypertension. Several reports have established that hypertension and the RAAS axis play a potential role in the pathogenicity of SARS-CoV-2 infection (Mancia et al. 2020; Jarari et al. 2016). Also, several studies and trials are underway on how RAAS inhibitors could modulate the infectivity of SARS-CoV-2 (Vaduganathan et al. 2020). The identification and characterization of specific drug target areas and mechanisms against SARS-CoV-2 remain an enigma, as does the role of antihypertensive drugs such as ACEIs and ARBs. Besides this, the possible mechanism behind ACE2 modulator drugs or drug-induced modulation of K+, and how they may be involved in lowering SARS-CoV-2 infection, is still an area that requires clarity. Along with SARS-CoV-2 infection, simultaneous modulation of the RAAS axis occurs, as the virus targets the ACE2 receptor, which in turn remains an integral part of the RAAS axis. While the manifestations of the COVID-19 virus persist, the modulated RAAS axis poses immediate and post-COVID-19 threats for a prolonged time. This observation has not received enough attention and needs in-depth investigation; otherwise its significance may be neglected. Possibly, along with therapy against SARS-CoV-2, researchers must simultaneously strategize solutions to the alteration in the RAAS axis, which would provide an array of opportunities to reduce severity and comorbidity in COVID-19 patients through the design of competent drugs.

Data and materials availability: Yes.

Author contribution: Mr. Rohit Sen designed the framework of the review article and prepared the manuscript. Dr. Devashish Sengupta provided inputs about the biochemical and drug-related aspects of the review. Dr. Avinaba Mukherjee supervised the entire manuscript preparation and hypothesized the possible drug target area.

Consent for publication: Yes.

Competing interests: None to declare.
Fabrication of Polyethyleneimine-Functionalized Magnetic Cellulose Nanocrystals for the Adsorption of Diclofenac Sodium from Aqueous Solutions

Diclofenac sodium (DS), one of the most used non-steroidal anti-inflammatory drugs worldwide, is often detected in wastewater and natural water. This drug is ecotoxic, even at low concentrations. Therefore, it is essential to fabricate low-cost adsorbents that can easily and effectively remove DS from contaminated water bodies. In this study, a polyethyleneimine (PEI)-modified magnetic cellulose nanocrystal (MCNC) was prepared with a silane coupling agent as a bridge. TEM, FTIR, XRD, and VSM were used to demonstrate the successful preparation of MCNC-PEI. This composite adsorbent exhibited efficient DS removal. Furthermore, the adsorption performance of MCNC-PEI toward DS was optimal under mildly acidic conditions (pH = 4.5). Adsorption kinetics showed that the adsorption process involves mainly electrostatic interactions. Moreover, the maximum adsorption capacity reached 299.93 mg/g at 25 °C, and the adsorption capacity decreased by only 9.9% after being reused five times. Considering its low cost, low toxicity, and high DS removal capacity, MCNC-PEI could be a promising adsorbent for treating DS-contaminated water.

Introduction

Diclofenac sodium (DS) is a popular non-steroidal anti-inflammatory drug and is often used in the clinical treatment of rheumatic diseases. Worldwide consumption of DS is approximately 1443 tons [1]. Owing to the huge production and usage of DS globally, it is frequently detected in surface water [2,3]. DS can cause toxic biological effects and microbial drug resistance in different living organisms in water and can even exert toxic effects on aquatic and terrestrial ecosystems, thereby posing a great threat to environmental and human health [4,5]. The adsorption method is an effective means of removing organic pollutants from water [6]. This low-cost method is simple to operate and produces no byproducts [7]. Common adsorption materials, such as bentonite, zeolite, and cellulose [8-10], possess advantages including a developed pore structure, high adsorption capacity, and low cost. However, issues such as poor adsorption regeneration and difficult solid-liquid separation greatly limit the practical application of these adsorbents. In recent years, magnetic nanoadsorbents have attracted extensive attention by compensating for the deficiencies of conventional adsorbents in water treatment [11]. Adsorbents containing magnetic nanoparticles can be easily separated with an external magnetic field [12,13]. Cellulose is the most abundant natural polysaccharide in nature and has the advantages of being cheap, biodegradable, and renewable [14-17]. Meanwhile, cellulose nanocrystals (CNC) can be extracted from cellulose and have a diameter of approximately 100 nm and a length ranging from hundreds of nanometers to several microns. The tensile strength of CNC has been observed to be equivalent to that of cast iron. Moreover, CNC has been widely used as an adsorbent for water treatment because of its high mechanical strength, adjustable surface chemical properties, and high specific surface area [18,19]. Because of the good hydrophilicity and stable colloidal properties of CNC, magnetic nanoparticles can be deposited on CNC by simple coprecipitation [20]. Therefore, magnetic adsorbents based on CNC have been widely studied [21,22].
These adsorbents are mostly beneficial for heavy metal removal, owing to the limited variety of functional groups in magnetic nanoparticles and CNC [23]. Additionally, the adsorption mechanism involved in the removal of heavy metal ions by magnetic adsorbents is the electrostatic interaction between the positive and negative charges in the adsorbent and adsorbate [24]. However, the removal of organic pollutants in water, especially DS, using magnetic nanocrystalline cellulose (MCNC) has rarely been reported. This may be attributed to the lack of a functional group in MCNC that can capture DS. The adsorption capacity of chemically modified cellulose for various aquatic pollutants generally exceeds that of unmodified cellulose. Many chemicals, such as inorganic nanoparticles, organic acids, and organic bases, have been used to modify cellulose. Polyethyleneimine (PEI) is a functional compound that is widely used for adsorption and is popular for its high amino content and high reactivity. More importantly, past studies have shown that DS can be adsorbed by the abundant -NH2 groups in PEI [25]. However, PEI requires a solid carrier owing to its high water solubility. Therefore, PEI is often integrated into various solid adsorbents to improve their adsorption performance [26]. Lu et al. reported a type of aminated silica/polyvinyl alcohol/chitosan composite bead cross-linked with PEI, which exhibited remarkable DS removal ability [27]. The only functional group in CNC is the hydroxyl group, which cannot directly react with PEI. Therefore, a bridge that can react with both CNC and PEI is essential. [3-(2,3-Epoxypropoxy)propyl]trimethoxysilane (EPPTMS) is a routine silane coupling agent containing silyloxy groups on one end, which can react with the hydroxyl groups in cellulose via a dehydration-condensation reaction. The other end carries an epoxy group, which can react with the amino groups in PEI [28]. In this study, we used EPPTMS to connect MCNC and PEI and to create an adsorbent that can effectively remove DS from water. The fabricated adsorbent was optimized based on adsorption parameters such as aqueous solution pH, adsorbent dosage, adsorption time, DS concentration, and adsorbent regeneration. The performance of our proposed adsorbent was also assessed and compared with previously reported adsorbents.

Preparation of MCNC-PEI

MCNC was prepared using a coprecipitation method. Specifically, 2 g of CNC was uniformly and ultrasonically dispersed in deionized water for 5 min. Subsequently, an N2 atmosphere was introduced into the dispersion to remove the O2. Then, FeCl3 (4.8 g) and FeCl2 (1.8 g) were added to the dispersion, the temperature of the reaction system was maintained at 75 °C, and the pH was adjusted to 9 using ammonia. The pH value was measured using pH test paper by dropping a small amount of the reaction solution onto the test paper. The reaction proceeded for 30 min. The product was washed consecutively with water and ethanol and then freeze-dried [29]. MCNC-PEI was prepared as follows: EPPTMS was grafted onto the surface of MCNC through hydrolysis of its silyloxy groups to silanol groups, which then condensed with the hydroxyl groups of MCNC. Specifically, MCNC (1 g) and EPPTMS (0.75 g) were added to 50 mL of dimethylformamide (DMF) solution (containing 2 mL of water), and the mixture was stirred at 80 °C for 24 h under reflux, which yielded MCNC-EPPTMS. Then, the reaction was continued following the addition of 1 g of PEI. After 24 h, the product was separated using a magnet and washed with DMF and ethanol.
The adsorbent was finally freeze-dried for further study [30]. Adsorption and Desorption Experiment DS solutions were prepared at concentrations from 5 to 500 mg/L, and the pH of the DS solution was adjusted from 4.5 to 7.5 with dilute acid and alkali solutions. All adsorption experiments were performed by mixing MCNC-PEI into 20 mL of DS solution. The dose of MCNC-PEI ranged from 5 mg to 30 mg. The adsorption temperature was maintained at 25 °C. The adsorption properties at two other temperatures (20 °C and 30 °C) were investigated for isothermal model analysis. All adsorption experiments were carried out in an oscillator with a disturbance frequency of 120 rpm. Equations (1) and (2) were used to calculate the instantaneous adsorption capacity (qt) and the equilibrium adsorption capacity (qe) of the adsorbent for DS [31]:

qt = (C0 - Ct)V/m (1)

qe = (C0 - Ce)V/m (2)

where C0 (mg/L) is the initial concentration, Ct (mg/L) is the concentration of DS at time t, m (g) is the dose of MCNC-PEI, V (L) is the volume of the adsorbed solution, and Ce (mg/L) is the equilibrium concentration of DS. The desorption experiments were conducted as follows: the DS adsorbed on MCNC-PEI was eluted with NaOH (0.1 M). MCNC-PEI-DS was added to the NaOH (0.1 M) solution, and ultrasonic vibration was used to speed up the removal of DS from MCNC-PEI. The MCNC-PEI-DS was further eluted with the NaOH (0.1 M) solution several times until DS could no longer be detected in the eluate. After freeze-drying, the eluted MCNC-PEI was used to adsorb DS under the same conditions. The operation was repeated four times, and the qe of MCNC-PEI for DS was measured each time. Characterization The physical structures of the adsorbents were analyzed using transmission electron microscopy (TEM, Hitachi SU8010, Tokyo, Japan). The samples observed by TEM were dispersed in ethanol and dropped onto a copper grid. Fourier transform infrared (FTIR, Bruker TENSOR II, Karlsruhe, Germany) spectra were recorded to study the functional groups of the adsorbents. The samples were compressed with potassium bromide and analyzed with OMNIC™ Specta software. Furthermore, the surface compositions of MCNC-PEI before and after the adsorption of DS were determined by X-ray photoelectron spectroscopy (XPS, Thermo ESCALAB 250XI, Waltham, MA, USA) using monochromatic Al Kα radiation at 1486.6 eV. The results were analyzed with CasaXPS software. An X-ray diffractometer (XRD, Bruker D8 ADVANCE, Karlsruhe, Germany) was used to determine the crystalline structure of the adsorbent. Lastly, the magnetic properties of the adsorbents were measured using a vibrating sample magnetometer (VSM, Quantum Design PPMS DynaCool, San Diego, CA, USA) at a temperature of 25 °C and an external magnetic field of 20 kOe.
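Before turning to the results, the batch calculations implied by Equations (1) and (2), together with the removal rate, can be illustrated with a short Python sketch; the concentrations, solution volume, and adsorbent mass below are made-up example values rather than measurements from this study.

```python
def adsorption_capacity(c0, c, volume_l, mass_g):
    """Capacity q = (C0 - C) * V / m, in mg/g (Eqs. (1) and (2))."""
    return (c0 - c) * volume_l / mass_g

def removal_rate(c0, ce):
    """Percentage of DS removed at equilibrium."""
    return (c0 - ce) / c0 * 100.0

# Illustrative batch: 20 mL of DS solution treated with 20 mg of adsorbent
c0 = 200.0   # initial DS concentration, mg/L (example value)
ce = 60.0    # equilibrium DS concentration, mg/L (example value)
q_e = adsorption_capacity(c0, ce, volume_l=0.020, mass_g=0.020)
print(f"q_e = {q_e:.1f} mg/g, removal = {removal_rate(c0, ce):.1f}%")
```

With these placeholder numbers, a 20 mg dose in 20 mL of a 200 mg/L solution that equilibrates at 60 mg/L gives qe = 140 mg/g and 70% removal.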
Preparation Scheme of MCNC-PEI In this study, MCNC was prepared by the coprecipitation method. Owing to the surface electronegativity of CNC, Fe3+ and Fe2+ can be enriched on the surface of CNC. After adjusting the pH to alkaline values, CNC loaded with magnetic nanoparticles was obtained. Magnetic adsorbents have unique advantages in terms of adsorbent separation. Moreover, the Si-OH bonds formed by hydrolysis of the silane coupling agent can react with the hydroxyl groups in MCNC by a dehydration-condensation reaction. Therefore, in this study, a silane coupling agent containing an epoxy group was grafted onto the MCNC. PEI is rich in amino groups, which can react with epoxy groups to graft PEI onto the MCNC. The reaction scheme of the preparation process of MCNC-PEI is shown in Figure 1. Figure 2a shows the TEM image of the Fe3O4 nanoparticles. These magnetic nanoparticles are spherical and show significant aggregation. After the MCNC was prepared via the coprecipitation method, CNC with a larger particle size was attached to the Fe3O4 nanoparticles, as shown in Figure 2b. Meanwhile, Figure 2c shows the morphology of the MCNC-PEI. The MCNC was coated with a uniform cross-linked polymer network structure, which results from the grafting of PEI onto MCNC through the reaction between PEI and the epoxy group. In addition, the polymer coverage resulted in the aggregation of MCNC, so the particle size of MCNC-PEI increased.
FTIR Analysis FTIR spectra were used to further explain the changes in the functional groups of the adsorbents. In the wavenumber range of 4000-400 cm−1, as shown in Figure 3a, the absorption peak at 579 cm−1 is the characteristic absorption peak of the Fe-O stretching vibration in the crystalline lattice of Fe3O4. The characteristic absorption peak of -OH in Fe3O4 is at 3388 cm−1 [32]. The peak at 3441 cm−1 corresponds to the stretching vibration of the hydroxyl group in cellulose. Moreover, the peak at 2920 cm−1 corresponds to the stretching vibration of -CH. The peak at 1635 cm−1 corresponds to the -OH bending vibration of absorbed water. The peak at 1053 cm−1 corresponds to the C-O-C pyranose ring stretching vibration [33,34]. The increase in the intensity of this band indicates increased cellulose content. These characteristic peaks of Fe3O4 and CNC are retained in the FTIR spectrum of MCNC, which confirms the successful preparation of MCNC. Figure 3b shows the FTIR spectrum of MCNC-PEI. In addition to the characteristic peaks of MCNC, the FTIR spectrum of MCNC-PEI also shows its own characteristic absorption peaks. The peak at 1082 cm−1 corresponds to the stretching vibration of Si-O-Si [35], whereas the peak at 3415 cm−1 corresponds to the characteristic positions of the -OH and -NH groups. The peak at 1648 cm−1 corresponds to the overlapping bending vibrations of N-H and the -OH of absorbed water. A locally magnified view of Figure 3b over the wavenumber range of 1400-1150 cm−1 shows a small absorption peak at 1336 cm−1, which corresponds to the stretching vibration of C-N due to the grafting of PEI [36]. The FTIR spectra further prove that MCNC-PEI was successfully prepared.
XRD Analysis XRD was used to examine the changes in the crystal structure of the composite adsorbent. As shown in Figure 4a, the diffraction peak at 22.8° corresponds to the (002) crystal plane of CNC [37]. The six diffraction peaks at 30.2°, 35.6°, 43.2°, 53.7°, 57.2°, and 62.9° correspond to the (220), (311), (400), (422), (511), and (440) crystal planes of Fe3O4, respectively [38,39]. When MCNC was modified using the silane coupling agent and PEI, the diffraction peak at 62.7° shifted to a larger angle, indicating that the particle size of MCNC was larger than that of MCNC-PEI, which is consistent with the conclusion of the TEM analysis. VSM The magnetic properties of the adsorbents were characterized by VSM. As shown in Figure 4b, the hysteresis loops of MCNC and MCNC-PEI exhibit the same behavior. Therefore, the modification of MCNC by PEI does not influence its magnetic properties. Moreover, the saturation magnetization of MCNC is 41.21 emu/g, whereas that of MCNC-PEI is 36.94 emu/g. The saturation magnetization of MCNC-PEI decreased owing to the coating of PEI. In practice, MCNC-PEI can still be quickly separated from water. XPS Analysis The weak Si 2p peak in Figure 5a derives from the combination of the silane coupling agent and MCNC. Meanwhile, the Fe 2p3/2 peak can be fitted as two peaks at 710.5 eV and 712.8 eV, as shown in Figure 5b. These two peaks belong to the Fe3+ and Fe2+ of Fe3O4, and the Fe3+/Fe2+ ratios (before and after DS adsorption) are 4.9 and 6.2, respectively. This ratio is slightly higher than the Fe3+/Fe2+ ratio in Fe3O4, indicating that some of the magnetic nanoparticles in MCNC-PEI are oxidized, and that the particles are further oxidized during the adsorption process. Figure 5c shows the XPS of N 1s. This element originated from PEI, which further confirms that PEI was successfully coated on MCNC. After adsorbing DS, part of the -NH2 is converted to -NH3+. This indicates that the combination of PEI and DS occurs through the -NH3+ form. Figure 5d shows the C 1s peak. The area ratio of the C-C peak increased slightly, while the other ratios decreased. This is because DS is loaded on MCNC-PEI, which changes the area ratio of each fitted peak of C 1s. The XPS results illustrate that PEI successfully modified the MCNC and plays an important role in DS removal.
Optimization of Adsorption Conditions Solution pH is one of the important factors affecting adsorption performance. To further study the adsorption mechanism, MCNC and MCNC-PEI were placed in DS solutions with different pH values. The DS adsorption capacities of MCNC and MCNC-PEI at different pH values showed opposite trends. Hydrogen bonds can form between MCNC and DS. Under more acidic conditions, the increasing free hydrogen ions can destroy these hydrogen bonds, and excessive hydroxide ions in an alkaline solution can also cause the dissociation of the hydrogen bonds. Therefore, the adsorption capacity of MCNC for DS was higher under neutral conditions. As shown in Figure 6a, the DS removal ability of MCNC-PEI was optimal in a slightly acidic environment (pH = 4.5-5.5), where the carboxyl group in DS is dissociated, whereas the amino groups in PEI are protonated (-NH3+). Therefore, MCNC-PEI can easily capture DS through electrostatic attraction. Figure 6b shows the states of MCNC-PEI and DS at different pH values [40]. Furthermore, the adsorption capacity was significantly affected by the quantity of adsorbent. As shown in Figure 6c, a higher adsorption capacity (211.4 mg/g) was obtained with a lower adsorbent dose (0.5 mg/mL), whereas the removal rate of DS was only 52.85%. Increasing the adsorbent dose gradually increased the DS removal rate; however, it was accompanied by a decrease in adsorption capacity. Therefore, when selecting the adsorbent dosage, the DS concentration must be considered to ensure optimal removal efficiency. Based on the experimental results, we chose an adsorbent dose of 1 mg/mL for further tests. The time taken for adsorption to reach equilibrium was closely linked to the concentration of DS. The adsorption capacity of DS at different adsorption times was studied, with DS concentrations ranging from 50 to 200 mg/L; the results are shown in Figure 6d. As the DS concentration increased, the time required for adsorption to reach equilibrium also increased.
When the concentration was 50 mg/L, adsorption reached equilibrium within 30 min, whereas up to 90 min was required at a DS concentration of 200 mg/L. In addition, a higher concentration of DS led to a greater adsorption capacity of MCNC-PEI for DS. The adsorption process is the capture of DS by MCNC-PEI; therefore, a high DS concentration prolongs the capture process. Moreover, the adsorption capacities of MCNC-PEI at different temperatures and DS concentrations are shown in Figure 6e. The adsorption rate gradually decreased as the DS concentration increased, indicating that the adsorption capacity of the adsorbent tends toward saturation. In addition, appropriately increasing the temperature increases the adsorption capacity of MCNC-PEI, because the higher temperature accelerates molecular motion, thus increasing the binding probability between DS and the adsorbent, which leads to a higher adsorption capacity. Adsorbent regeneration pertains to the use of physical or chemical methods to separate or decompose the adsorbate on the adsorbent surface and to restore the adsorption performance and reusability. A regenerable adsorbent reduces the treatment cost. In this study, the regeneration of MCNC-PEI-DS was achieved by alkali extraction. The adsorption capacity of MCNC-PEI for DS declined by 9.9% after it had been used five times, as shown in Figure 6f. This shows that the adsorption capacity of MCNC-PEI remains stable over the long term. According to Table 1, MCNC-PEI has a higher adsorption capacity for DS compared with previously reported adsorbents. This is related to the variety of adsorption mechanisms between MCNC-PEI and DS, including hydrogen bonding and electrostatic interactions. In addition, as a polyamine polymer, PEI has a high DS binding ability, which is significantly higher than that of chitosan. Adsorption Mechanism Adsorption is a process whereby adsorbates are concentrated on solid adsorbents, either by physical or chemical means. Physical adsorption mainly depends on van der Waals forces and does not involve electron transfer. This adsorption mode has a low capacity, and the adsorption and desorption processes are rapid. Meanwhile, chemical adsorption involves the complexation of adsorbate molecules with functional groups on the surface of the adsorbent [44]. This process requires activation energy. Compared with physical adsorption, the adsorption and desorption rates are slower. In addition, the adsorption rate increases as the temperature increases [45,46]. Kinetic models are used to explore and predict the adsorption mechanism. The data for MCNC-PEI adsorption of DS were fitted with the pseudo-first-order and pseudo-second-order kinetic models. The pseudo-first-order model describes an adsorption process that is controlled by diffusion and involves only a single adsorption site. The pseudo-second-order kinetic model accounts for an adsorption process that involves the sharing or transfer of electron pairs between the adsorbate and adsorbent, with two binding sites on the adsorbent surface [47]. The equations for the two adsorption kinetic models are as follows [48]:

qt = qe(1 - exp(-k1 t)) (3)

qt = (qe^2 k2 t)/(1 + k2 qe t) (4)

where k1 and k2 are the respective adsorption rate constants of the two adsorption kinetic models. The results are shown in Figure 7, and the relevant parameters are presented in Table 2.
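As an illustration of how Equations (3) and (4) are typically fitted, the following Python sketch applies nonlinear least squares to placeholder kinetic data; the time points, capacities, and initial guesses are assumptions for demonstration and do not reproduce the study's Figure 7 or Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, qe, k1):
    # Eq. (3): q_t = q_e (1 - exp(-k1 t))
    return qe * (1.0 - np.exp(-k1 * t))

def pseudo_second_order(t, qe, k2):
    # Eq. (4): q_t = q_e^2 k2 t / (1 + k2 q_e t)
    return (qe**2 * k2 * t) / (1.0 + k2 * qe * t)

# Placeholder kinetic data (time in min, q_t in mg/g)
t = np.array([5, 10, 20, 30, 45, 60, 90, 120], dtype=float)
qt = np.array([80, 130, 180, 210, 230, 240, 248, 250], dtype=float)

for name, model in [("pseudo-first-order", pseudo_first_order),
                    ("pseudo-second-order", pseudo_second_order)]:
    popt, _ = curve_fit(model, t, qt, p0=[qt.max(), 0.01], maxfev=10000)
    resid = qt - model(t, *popt)
    r2 = 1.0 - np.sum(resid**2) / np.sum((qt - qt.mean())**2)
    print(f"{name}: qe = {popt[0]:.1f} mg/g, k = {popt[1]:.4f}, R^2 = {r2:.4f}")
```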
Parametric analysis of these models shows that the adsorption of DS by MCNC-PEI is more consistent with the pseudo-second-order kinetic model; therefore, the adsorption process involves the sharing or transfer of electron pairs. The adsorption isotherm models considered here are the Freundlich and Langmuir isotherm models. The Freundlich isotherm model describes multilayer adsorption and heterogeneous adsorption systems, so it has a wide range of applications in the study of adsorption mechanisms. Meanwhile, the Langmuir adsorption isotherm model describes monolayer adsorption. The basic assumptions of the model are as follows: the adsorbent surface energy is uniform, the intermolecular forces of the adsorbate are disregarded, and the adsorption process is reversible [49]. The adsorption capacities for the Freundlich and Langmuir isotherm models are given by Equations (5) and (6), respectively [50]:

qe = KF Ce^(1/n) (5)

qe = (qm KL Ce)/(1 + KL Ce) (6)

where qm (mg/g) represents the maximum adsorption capacity of MCNC-PEI, KL (L/mg) is a constant related to the adsorption affinity, and n and KF (mg/g) are the adsorption strength and the Freundlich constant, respectively. These isothermal adsorption models were used to fit the data for DS adsorption by MCNC-PEI. The fitting curves of the adsorption data of MCNC-PEI at different temperatures are shown in Figure 8, and the relevant parameters are listed in Table 3. The nonlinear fitting coefficients (R^2) indicate that the adsorption process is consistent with the Langmuir isotherm model. Moreover, the DS adsorption process of MCNC-PEI is monolayer adsorption, and the active sites on the adsorbent surface are uniform. The qmax fitted by the Langmuir isotherm equation at 293.2 K is 297.49 mg/g, and this value increases as the adsorption temperature increases, which shows that the adsorption process is endothermic. Increasing the temperature contributes to the improved adsorption performance of MCNC-PEI on DS.
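The isotherm fits of Equations (5) and (6) can be sketched in the same way; again, the equilibrium concentrations and capacities below are illustrative placeholders, not the data behind Figure 8 or Table 3.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    # Eq. (6): q_e = q_m K_L C_e / (1 + K_L C_e)
    return qm * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n):
    # Eq. (5): q_e = K_F C_e^(1/n)
    return kf * ce**(1.0 / n)

# Placeholder equilibrium data (C_e in mg/L, q_e in mg/g)
ce = np.array([5, 20, 50, 100, 200, 350], dtype=float)
qe = np.array([60, 150, 220, 260, 285, 295], dtype=float)

(qm, kl), _ = curve_fit(langmuir, ce, qe, p0=[300.0, 0.05])
(kf, n), _ = curve_fit(freundlich, ce, qe, p0=[50.0, 3.0])
print(f"Langmuir:   q_max = {qm:.1f} mg/g, K_L = {kl:.3f} L/mg")
print(f"Freundlich: K_F = {kf:.1f}, n = {n:.2f}")
```

Comparing the residuals (or R^2) of the two fits is what underlies the statement that the Langmuir model describes the data better.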
The DS removal advantage of MCNC-PEI was assessed in comparison with adsorbents previously reported in the literature. The comparison was based on the qmax value obtained from the Langmuir isotherm model, and the values are presented in Table 4. The proposed MCNC-PEI clearly has the highest maximum adsorption capacity for DS among the adsorbents in the literature. Conclusions An environmentally friendly and convenient adsorbent was constructed in this work. MCNC-PEI shows good paramagnetism and can be quickly separated and recovered from liquid by an external magnetic field. The adsorption of DS by MCNC-PEI relies mainly on hydrogen bonding and charge interaction. Experimental results confirm that MCNC-PEI has a strong DS removal capacity. The optimal amount of adsorbent to use is 1 mg/mL. The adsorption process of MCNC-PEI on DS is consistent with the pseudo-second-order kinetic and Langmuir isotherm models. The maximum adsorption capacity can reach 300.19 mg/g at 30 °C. In addition, MCNC-PEI has shown excellent recyclability. Adsorbents play a considerable role in the treatment of water environments, and efficient, fast composite adsorbents are gradually showing their advantages. Based on these advantages, MCNC-PEI is a promising adsorbent material for the removal of DS.
Evaluating single-point quantitative magnetization transfer in the cervical spinal cord: Application to multiple sclerosis Spinal cord (SC) damage is linked to clinical deficits in patients with multiple sclerosis (MS); however, conventional MRI methods are not specific to the underlying macromolecular tissue changes that may precede overt lesion detection. Single-point quantitative magnetization transfer (qMT) is a method that can provide high-resolution indices sensitive to underlying macromolecular composition in a clinically feasible scan time by reducing the number of MT-weighted acquisitions and utilizing a two-pool model constrained by empirically determined constants. As the single-point qMT method relies on a priori constraints, it has not been employed extensively in patients, where these constraints may vary, and thus the biases inherent in this model have not been evaluated in a patient cohort. We therefore addressed the potential biases in the single-point qMT model by acquiring qMT measurements in the cervical SC in patient and control cohorts and evaluated the differences between the control- and patient-derived qMT constraints (kmf, T2fR1f, and T2m) for the single-point model. We determined that the macromolecular to free pool size ratio (PSR) differences between the control- and patient-derived constraints are not significant (p > 0.149 in all cases). Additionally, the derived PSR for each cohort was compared, and we report that the white matter PSR in healthy volunteers is significantly different from lesions (p < 0.005) and normal appearing white matter (p < 0.02) in all cases. The single-point qMT method is thus a valuable method to quantitatively estimate white matter pathology in MS in a clinically feasible scan time. Introduction In patients with multiple sclerosis (MS), spinal cord (SC) damage is often thought to be responsible for a majority of clinically noted deficits (Ikuta and Zimmerman, 1976; Kearney et al., 2015a). The SC is less than one-tenth the size of the brain; thus, even small lesions can affect significant portions of the SC, and in some cases a small lesion in a white matter (WM) column can impair all function derived from that column. Radiologically, several studies have shown that SC pathology may provide a more direct indicator of disease progression and clinical disability than the brain can provide alone (Miller, 1994; Patrucco et al., 2012). Conventional T1 and T2 magnetic resonance imaging (MRI) methods are sensitive to inflammatory and water content changes in SC MS disease (Gass et al., 1998) but not specific (Bergers et al., 2002) to axonal damage or demyelination. As a result, T1-weighted (T1-w) and T2-w MRI contrasts remain poor indicators of the static severity of the disease and/or of disease progression. Alternatively, magnetization transfer (MT) imaging has been shown to be sensitive to changes in myelin content, even in areas free from macroscopic lesions (Koenig, 1991; Kucharczyk et al., 1994; Schmierer et al., 2007). Free water protons observed with conventional MRI methods (T1-w and T2-w imaging) are in exchange with protons associated with semi-immobile macromolecules in tissue (Wolff and Balaban, 1989); thus, MT is the biophysical phenomenon whereby an off-resonance (with respect to water) saturation is transferred to the free water from the semi-solid-like protons through dipole-dipole interactions or direct chemical exchange.
The magnitude of the MT effect has traditionally been quantified by the magnetization transfer ratio (MTR) and has been used in several studies to show that MT is correlated with myelin content (Filippi and Rocca, 2007; Laule et al., 2007; Schmierer et al., 2007). However, the MTR is only semi-quantitative and is significantly dependent on the MR acquisition parameters, as well as B1 and B0 inhomogeneities (Berry et al., 1999) and other non-MT-specific NMR parameters (Henkelman et al., 1993; Stanisz et al., 2005). Some of the limitations of the MTR have been rectified by modeling the MT signal (often via a two-pool model) as a function of the offset frequency of saturation (Sled and Pike, 2000, 2001) and quantitatively deriving indices such as the macromolecular to free pool size ratio (PSR); this approach is termed quantitative MT (qMT). qMT typically utilizes several images at multiple RF irradiation powers and/or offsets, from which a so-called MT Z-spectrum can be generated for each voxel (Hinton and Bryant, 1996). In general, the measured Z-spectrum and associated fit (e.g., with a two-pool model comprising free [f] and macromolecular [m] pools) generate several indices, including the PSR (Dortch et al., 2010; Gochberg et al., 1997), the MT exchange rate from the macromolecular pool to the free pool (kmf), and the transverse and longitudinal relaxation rates for each pool. There is increased interest in estimating the PSR, and several studies (Ou et al., 2009b; Rausch et al., 2009; Samsonov et al., 2012; Schmierer et al., 2007; Underhill et al., 2011) have shown a strong correlation between the PSR and white matter myelin density. Indeed, several studies of MS have incorporated qMT in the brain of MS patients, showing an association between the PSR and MR measures of myelin (Davies et al., 2004; Levesque et al., 2010; Tozer et al., 2003). In principle, however, qMT studies have been limited by long acquisition times due to the need to collect multiple MT-weighted images and the demand for a high signal to noise ratio (SNR), and thus are difficult to implement clinically. Recently, fast whole-brain mapping of the PSR using only a single MT-weighted image (and a reference image) (Yarnykh, 2012) was developed, and our group subsequently applied this method in the SC (Smith et al., 2014) and the thigh (Li et al., 2015) in healthy volunteers. This fast qMT estimation procedure (single-point qMT) is accomplished by utilizing expected or measured constraints on the individual indices that comprise the two-pool model (e.g., the exchange rate, the macromolecular T2, and the combined value T2R1 of the water pool; see Yarnykh, 2012 for a full derivation of this model), providing an opportunity to sample fewer points along the MT z-spectrum and utilize the scan time savings for improved resolution or more rapid acquisitions. However, to date, the single-point qMT has only been applied in one patient study in the brain of MS patients (Yarnykh et al., 2015), utilizing constraints derived from healthy volunteers alone. We seek to further this work by assessing whether a set of healthy-cohort-derived constraints placed on the two-pool qMT model sufficiently captures the variation in the cervical spinal cord of patients with MS, or whether it is necessary to derive individual patient-centered constraints to accurately estimate the PSR and its sensitivity to MS pathology.
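For reference, before the acquisition details, the conventional MTR mentioned above is simply the normalized signal difference between a reference image and an MT-weighted image; the minimal Python sketch below uses toy arrays and is not the two-pool qMT fitting employed in this study.

```python
import numpy as np

def mtr_map(s_ref, s_mt, eps=1e-6):
    """Conventional magnetization transfer ratio: MTR = (S0 - S_MT) / S0.

    s_ref : reference image acquired without MT saturation
    s_mt  : MT-weighted image acquired with off-resonance saturation
    Returns MTR in percent units (p.u.).
    """
    s_ref = np.asarray(s_ref, dtype=float)
    s_mt = np.asarray(s_mt, dtype=float)
    return 100.0 * (s_ref - s_mt) / np.maximum(s_ref, eps)

# Toy 2x2 "images" for illustration only
s0 = np.array([[100.0, 120.0], [90.0, 110.0]])
smt = np.array([[60.0, 70.0], [55.0, 80.0]])
print(mtr_map(s0, smt))  # roughly 40, 41.7, 38.9, 27.3 p.u.
```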
Data acquisition The local Institutional Review Board approved this study, and signed informed consent was obtained prior to examination. Data were obtained from two cohorts: 1) thirteen healthy volunteers (8 males, age range 24-33 years, mean ± standard deviation [SD] age 25 ± 2.5 years) and 2) eight relapsing-remitting MS (RRMS) patients (4 males, age range 30-49 years, mean ± SD age 40.5 ± 5.37 years, median EDSS score = 0, range 0-3.5) along with a primary progressive MS (PPMS) patient (male, 60 years old, EDSS score 5). Some healthy volunteers overlapped with an existing qMT study (Smith et al., 2014). Table 2 provides relevant clinical and demographic characteristics of the patients. All MRI data were acquired on a 3.0 tesla Philips Achieva scanner (Philips Healthcare, Best, The Netherlands, software versions R3.2.2.0 and R5.1.7.1). A quadrature body coil was used for excitation, and a 16-channel neurovascular coil was used for signal reception. The field-of-view (FOV) was centered between the C3 and C4 vertebral bodies and spanned, at minimum, the C2 to C5 vertebral levels in all subjects. Second-order shimming was used to minimize image artifacts arising from susceptibility differences between bone and tissue. The same MT protocol from Smith et al. (2014) was used here; two MT pulse sequences were performed: 1) a low spatial resolution acquisition (1 × 1 mm2) at 8 offsets (Δω) and 2 powers (αRF) with a "full-fit" analysis (Yarnykh, 2002; Yarnykh and Yuan, 2004) and 2) a high-resolution acquisition (0.65 × 0.65 mm2) at 1 offset and power to be used with a "single-point" analysis (Yarnykh, 2012). For the full-fit qMT experiment, MT-weighted images were acquired using a 3D MT-prepared spoiled gradient echo sequence (Sled and Pike, 2001) with a GRE readout and a SENSE acceleration factor of 2 over 12 slices. Other parameters were: FOV = 150 × 150 mm2 and 2 signal averages. MT weighting was achieved using a 20-ms, single-lobed sinc pulse with Gaussian apodization (Smith et al., 2014). High-resolution, single-point MT data were acquired using the same parameters, except that a nominal in-plane resolution of 0.65 × 0.65 mm2 was applied with six signal averages. To correct for B1 and B0 inhomogeneities across the spinal cord, B1 and B0 maps were acquired using fast 3D techniques; T1 mapping was performed using a multiple flip angle (MFA) acquisition. A high-resolution multi-echo gradient echo (mFFE) scan was also performed, and all echoes were averaged to generate a high grey/white matter contrast target image for registration (Held et al., 2003). A detailed list of the sequence parameters is included in Table 1 (scan parameters and MT prepulse parameters for the high-resolution anatomical (mFFE), low- and high-resolution MT, B1, B0, and T1 scans). Image processing All data analyses were performed in MATLAB R2016a (Mathworks, Natick, MA). Prior to data fitting, all images were cropped to an area immediately around the SC and co-registered to the mFFE volume using the FLIRT package from FSL v5.0.2.1 (FMRIB, Oxford, UK) (Jenkinson et al., 2002; Jenkinson and Smith, 2001). The co-registration was applied on a slice-by-slice basis, utilized 2D rigid body registration, and was limited to translation and rotation (± 5°) in-plane only (i.e., translation in x and y, and rotation about the z-axis). Additionally, the registration used a normalized correlation search cost and spline interpolation. Following co-registration, qMT parameter maps were generated for each volunteer and patient using the full-fit qMT model described in Yarnykh (2002) and Yarnykh and Yuan (2004).
This model contains six independent parameters: R1m, R1f, T2m, T2f, PSR = M0m/M0f, and kmf = kfm/PSR (subscripts "f" and "m" represent the "free water" and "macromolecular" pools, respectively). The R1obs (1/T1obs) maps were independently reconstructed by fitting the MFA data to the SPGR signal equation in the steady state (Fram et al., 1987); these maps were used during MT parameter estimation (below) to estimate the parameter R1f (Yarnykh, 2002; Yarnykh and Yuan, 2004). Henkelman et al. (1993) and Morrison and Henkelman (1995) showed that the signal dependence on R1m is weak; therefore, R1m was set equal to R1f (Yarnykh, 2012). The remaining parameters were estimated for each voxel by fitting the full-fit qMT data to the above-described two-pool MT model (Yarnykh, 2002; Yarnykh and Yuan, 2004). For all fitting, the nominal offset frequency and RF amplitudes were corrected in each voxel using the B0 and B1 maps, respectively. It has been shown that kmf, T2m, and the product T2fR1f can all be fixed during the fitting process because they all exhibit relatively constant values across tissues (Smith et al., 2014; Yarnykh, 2012). This results in a model with one free parameter, the PSR, which can be estimated from qMT data at a single offset/power (plus a reference scan). To estimate reasonable fixed parameter values for the single-point qMT analysis, histograms of kmf, T2m, and T2fR1f were created (see Section 2.3 below), and the median value of each parameter was chosen to enter into the single-point qMT analysis; the high-resolution MT-weighted images were then analyzed to estimate high-resolution PSR maps. Single point constraints: controls vs patients To determine whether the healthy control-derived (CD) constraints (assumptions) were similar to those from the MS patient-derived (PD) constraints, mean parameter values for T2fR1f, kmf, and T2m were calculated over each slice for each subject, and a Kruskal-Wallis (nonparametric ANOVA) test was performed to evaluate whether differences exist between cohorts for all estimated parameters. Next, to estimate reasonable fixed parameter values to enter into the single-point qMT analysis, histograms of T2fR1f, kmf, and T2m derived from the full-fit analysis were created over the SC for all slices and for each cohort of subjects. The skewness of each histogram was also calculated, where a positive skewness indicates right-skewed data, while a negative skewness indicates left-skewed data. Lastly, the median value of each of T2fR1f, kmf, and T2m in each cohort was determined. The calculated constraints (assumptions) from the control cohort were applied to the high-resolution control qMT data to derive the PSR maps. The calculated constraints from both cohorts (CD and PD) were used to estimate two sets of high-resolution PSR maps for the patient cohort.
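As a rough sketch of the R1obs estimation described above, the following Python code fits the steady-state SPGR signal equation to multiple-flip-angle data for a single voxel; the TR, flip angles, and noise level are illustrative assumptions, not the protocol values listed in Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

TR = 50e-3  # repetition time in seconds (illustrative value)

def spgr_signal(alpha_rad, m0, t1):
    """Steady-state spoiled gradient echo signal for flip angle alpha (radians)."""
    e1 = np.exp(-TR / t1)
    return m0 * np.sin(alpha_rad) * (1.0 - e1) / (1.0 - e1 * np.cos(alpha_rad))

# Simulated multiple-flip-angle measurement for one voxel
flips_deg = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
alphas = np.deg2rad(flips_deg)
true_m0, true_t1 = 1000.0, 1.3  # "ground truth" used only to simulate data
signal = spgr_signal(alphas, true_m0, true_t1)
signal += np.random.default_rng(0).normal(0.0, 2.0, signal.size)  # add noise

(m0_fit, t1_fit), _ = curve_fit(spgr_signal, alphas, signal,
                                p0=[signal.max() * 10, 1.0])
print(f"fitted T1 = {t1_fit:.3f} s, R1obs = {1.0 / t1_fit:.3f} s^-1")
```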
Tissue segmentation WM and grey matter (GM) tissues in the control cohort were segmented from the mFFE images using the multi-atlas segmentation tool (Asman and Landman, 2013) previously developed for mFFE acquisitions. Each GM/WM region of interest (ROI) was eroded using the imerode tool from MATLAB (with a disk-shaped structuring element of radius 1) to avoid partial volume effects, and the mean PSR values from the single-point data were calculated voxel-by-voxel for each volunteer. An example segmentation is shown in Fig. 1a. Since the multi-atlas procedure does not account for lesions, WM lesion (WM-L), normal appearing white matter (NAWM), and normal appearing grey matter (NAGM) ROIs in the patient cohort were drawn manually by two independent raters on the high-resolution mFFE image (shown in Fig. 1b and c). ROIs were placed manually using MIPAV (NIH, Bethesda, MD) for each slice of each subject. The ROIs were then eroded as for the healthy control multi-atlas method to ensure that only the NAWM, NAGM, and WM-Ls were identified, with minimal confounding from partial volume effects. The mean single-point PSR was calculated for each subject and tissue type (NAWM, NAGM, WM-L). Statistical analysis of patient and control single point PSR Statistical comparisons were performed on the mean PSR values between i) raters for the patient cohort using the PSR estimated from the patient-derived constraints, ii) each tissue type in the patient cohort (NAWM, NAGM, and WM-Ls) using both sets of derived constraints (CD and PD), and iii) the healthy control cohort (WM and GM) and the patient cohort for each tissue type and set of constraints. A significance threshold of p < 0.05 was chosen for all statistical comparisons. The Wilcoxon rank-sum test was used for comparisons ii) and iii), while the Bland-Altman analysis (Bland and Altman, 1986), consisting of the normalized Bland-Altman difference (DBA), the 95% confidence interval for the difference, and the limits of agreement (1.96*SD of the difference across scans), was used for the inter-rater comparisons in i). Comparison of control and patient single point constraints The histograms derived from the low-resolution, full-fit qMT analysis for kmf, T2fR1f, and T2m over the whole cervical cord are shown in Fig. 2 for both the control and patient groups. None of the parameter estimates were found to be significantly different between controls and patients (p-values: kmf, 0.149; T2fR1f, 0.355; T2m, 0.576). All histograms are single-peaked and show similar skewness between cohorts: the kmf (skewness: CD 2.11, PD 2.04) and T2fR1f (skewness: CD 2.40, PD 2.19) histograms are right-skewed, with long tails at high values, while the T2m histogram presents little to no skew (skewness: CD 0.65, PD 0.50). The median values for the control and patient cohorts are [8.76, 7.54] s−1, [0.0255, 0.0279], and [10.66, 10.51] μs for kmf, T2fR1f, and T2m, respectively, and are also shown in Table 3. High-resolution single point data Anatomical images, R1obs maps, and PSR maps are displayed in Fig. 3 for a healthy control and a patient with MS, where the constraints derived from the patient population were used to generate the PSR values. Note that the contrast in the PSR is such that WM areas have a higher PSR value (yellow-red) than GM (green), while the CSF exhibits little to no MT effect (dark blue). The average T1obs values for the healthy GM and WM are [GM: 1.37 ± 0.08 s, WM: 1.28 ± 0.08 s], while the average T1obs values in the patient GM, NAWM, and lesions are [GM: 1.49 ± 0.16 s, NAWM: 1.38 ± 0.14 s, Lesions: 1.49 ± 0.19 s]. Several differences can be appreciated when the PSR in the healthy control and the patient are compared. In areas associated with a lesion on the anatomical image, we see a decrease in the PSR of the patient (0.11 ± 0.03); this is reduced compared with the NAWM (0.14 ± 0.04) and the control WM (0.18 ± 0.03) PSR. The Bland-Altman plots for the inter-rater comparison are displayed in Fig. 4.
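The inter-rater quantities behind the Bland-Altman plots (the mean difference, its 95% confidence interval, and the limits of agreement) can be computed as in the Python sketch below; the per-patient PSR values are invented for illustration, and the sketch uses the raw (unnormalized) difference rather than the normalized DBA described above.

```python
import numpy as np

def bland_altman(rater1, rater2):
    """Mean difference, its 95% confidence interval, and the limits of agreement."""
    r1, r2 = np.asarray(rater1, float), np.asarray(rater2, float)
    diff = r1 - r2
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    se = sd_diff / np.sqrt(diff.size)
    ci95 = (mean_diff - 1.96 * se, mean_diff + 1.96 * se)      # CI of the mean difference
    loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)  # limits of agreement
    return mean_diff, ci95, loa

# Toy per-patient mean PSR values from two raters (illustrative numbers)
rater1 = [0.145, 0.152, 0.138, 0.160, 0.147, 0.155]
rater2 = [0.148, 0.150, 0.141, 0.157, 0.149, 0.152]
md, ci, loa = bland_altman(rater1, rater2)
print(f"mean difference = {md:.4f}, 95% CI = {ci}, limits of agreement = {loa}")
```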
The 95% confidence intervals for all tissues overlap zero, indicating that there are no significant differences between raters. Furthermore, the limits of agreement in the NAWM (Fig. 4a) and GM (Fig. 4b) are within ± 1.5%, which is within one standard deviation of the mean PSR over all patients (see Table 4). The lesions had the largest 95% confidence intervals and limits of agreement, indicating that the delineation of lesion boundaries produced the largest variation between raters. However, none of the tissue types were shown to be significantly different (p-values: NAWM, 0.480; GM, 0.07; WM-Ls, 0.337); therefore, all subsequent analyses use only the ROIs taken from rater 1. Mean single-point PSR values for the healthy controls and MS patient groups are shown in Table 4. In all cases, the healthy PSR (WM: 0.19 ± 0.02, GM: 0.16 ± 0.02) was found to be higher than the patient PSR (CD: NAWM 0.15 ± 0.02, NAGM 0.13 ± 0.02) for each respective tissue class. The lesion data displayed PSR values (CD WM-Ls: 0.13 ± 0.03) similar to the NAGM over all patients, and these were in all cases lower than the PSR of the healthy WM and patient NAWM. The statistical comparisons between the healthy and patient cohorts from the Wilcoxon rank-sum test are displayed in Table 5 for WM. The WM in the healthy controls was significantly different from both the CD and PD PSR for all tissue types (p-values < 0.01 for WM-Ls, p-values < 0.05 for NAWM). Additionally, the NAWM was found to be significantly different from lesions (p < 0.05) in all cases. The CD NAGM PSR data were found to be significantly different from the healthy GM (p = 0.008), while the PD NAGM only approached significance. No significant differences were observed between the PD and CD PSR data in NAWM, NAGM, or WM-L (p > 0.15 in all cases). This demonstrates that the small differences seen in kmf did not significantly affect the single-point model when it was applied to a patient cohort. Discussion The goal of this study was to evaluate how assumed constraints derived from a full qMT analysis, and applied to a single-point qMT method, differ in pathology, such as in MS. We compared the calculated constraints for each population (healthy controls and patients with MS) and evaluated the PSR across patients (using both CD and PD assumptions) in the NAWM, NAGM, and WM-Ls, as well as between MS patients and healthy controls. We demonstrated that the error observed in the full-fit analysis for MS patients does not significantly bias the PSR calculations derived from the single-point methodology. To the best of our knowledge, this is the first study to perform a full qMT analysis in the SC of patients with MS and to evaluate the validity of the single-point assumptions. Therefore, our results, although preliminary, provide important and novel conclusions. The observed kmf in patients was found to be 7.54 s−1, which is lower than what has been observed in healthy controls (kmf = 8.76 s−1). Additionally, the T1obs was found to be higher in WM-Ls compared with NAWM and healthy WM, which is expected and is incorporated into the two-pool qMT model. Therefore, our results suggest that the observed PSR changes are driven by macromolecular content rather than inflammation (Schmierer et al., 2007).
Nevertheless, one should keep in mind that these changes may also be due to the increased water content present in advanced WM-Ls, where a high degree of tissue loss is present relative to other tissues, which would decrease both the PSR and kmf and increase the T1. Laule et al. (2016) recently demonstrated that the myelin water fraction (MWF) is significantly decreased in postmortem lesions relative to NAWM and patient GM, which may explain the observed changes in T1 and exchange rate in patients with MS relative to healthy controls. A potential source of bias in the above analysis is caused by using median, whole-cord values for the single-point constraints (e.g., kmf, T2fR1f, and T2m), as the heterogeneity present in the NAWM, NAGM, and WM-Ls may be removed. Therefore, ROIs were also drawn in the low-resolution data for each tissue type (NAWM, NAGM, and WM-Ls), and the mean parameter values were found for each subject and tissue type. The Wilcoxon rank-sum test was performed between each patient tissue type and the mean whole-cord control data. No significant differences were found between the tissue-specific patient data and the control data (p > 0.15 in all cases). This is an important characterization, as it further indicates that kmf, T2fR1f, and T2m are not sensitive to MS-induced changes, signifying that the PSR alone is an indicator of macromolecular changes in the CNS. The high-resolution PSR data in Table 5 and Fig. 3 demonstrate significant differences between the healthy WM PSR and the NAWM, NAGM, and WM-L PSR in patients. Furthermore, the changes in exchange rate seen in the full-fit data do not produce a significant difference in PSR between the CD and PD patient data. This indicates that there may be pathological changes to the neurological tissues that cannot be visualized in the anatomical data. As MT has been shown to visualize the underlying macromolecular tissue dynamics within tissues (Ou et al., 2009a; Schmierer et al., 2007), these changes may reflect underlying neurological tissue changes that occur prior to more overt radiological symptoms seen with conventional imaging methodologies. An important aspect of the PSR is its sensitivity to macromolecular-induced changes in WM. Therefore, detecting WM changes along the spinal cord would provide substantial benefits to researchers and clinicians, as the PSR could then be used to track how lesions may be affecting NAWM caudal to the lesion site. To this end, we compared how the PSR changed along the spinal cord in the patient cohort. We used the Kruskal-Wallis test (nonparametric ANOVA) to determine whether the variation along the spinal cord was greater than the variation between patients for both NAWM and NAGM. Although the NAGM did not show significant differences (p = 0.26), significant differences were observed for the NAWM (p < 0.05). In particular, near C2/C3, the NAWM PSR was approximately 0.161 ± 0.024, while the NAWM PSR was approximately 0.127 ± 0.026 near C5/C6. We have demonstrated previously (Smith et al., 2014) that the PSR does not display regional variations in healthy volunteers. Therefore, the PSR may be detecting regional changes in the macromolecular content due to pathology. However, further research utilizing a larger patient population is needed in order to confirm this hypothesis.
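The along-cord comparison described above can be sketched with scipy's Kruskal-Wallis test by grouping PSR values by vertebral level; the levels and values below are illustrative placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Illustrative NAWM PSR values grouped by vertebral level (one value per patient)
psr_by_level = {
    "C2/C3": [0.16, 0.15, 0.17, 0.14, 0.18],
    "C3/C4": [0.15, 0.14, 0.16, 0.13, 0.15],
    "C4/C5": [0.14, 0.13, 0.15, 0.12, 0.14],
    "C5/C6": [0.13, 0.12, 0.13, 0.11, 0.12],
}

h_stat, p_value = stats.kruskal(*psr_by_level.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
```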
GM demyelination has recently become a topic of interest when evaluating radiological and clinical deficits associated with MS. Gilmore et al. (2006) reported evidence of GM demyelination in postmortem studies of patients with MS and found a significantly greater proportion of demyelinated GM compared with WM. Kearney et al. (2013, 2015b) reported GM involvement in multiple studies; this suggests that GM may be significantly affected by MS in the SC. Here, we confirm these postmortem findings with our in vivo measurements: NAGM in patients had significantly different qMT-derived measures than those of healthy persons. The patient group introduced in this work was on average 15 years older than the respective control group, which may have contributed to some of the differences between the WM and NAWM. Indeed, a study has shown age-related changes in the MTR (Ge et al., 2002), which may translate to the PSR. While this study demonstrated that there were no significant differences between the CD and PD single-point qMT constraints, there may be changes in the PSR due to age, which may bias any conclusions that are drawn from the data. Therefore, future studies should seek to compare patient data to age-matched control data to ensure that changes are due to pathology alone. Although the patients in this study presented with multiple focal lesions, their clinical disability scores were fairly low. While the highest Expanded Disability Status Scale (EDSS) score in the patient cohort was 5, most of the patients had EDSS scores of 1 or 0, which biases any correlations that could be performed. Addressing the impact of our qMT-derived measures on clinical disability was outside the scope of this work, which was not powered for such an analysis. Nevertheless, this is the next logical continuation of this study. Our group is already expanding the work to a larger cohort of MS patients with a more heterogeneous clinical expression of the disease to assess the clinical implications of our imaging findings. Conclusions This study demonstrated that a set of control-derived constraints can be used to accurately map PSR data in patients with MS. Our results also demonstrate that the PSR is an important tool to quantify MS and may provide a more stable measure of the effects of demyelination and axonal damage than can be provided through conventional imaging alone. Developing clinically oriented metrics to quantify tissue pathologies may offer additional insights into disease diagnosis and progression.
Estimation of Extreme Significant Wave Height in the Northwest Pacific Using Satellite Altimeter Data Focused on Typhoons (1992–2016) The estimation of extreme ocean wave heights is important for understanding the ocean's response to long-term changes in the ocean environment and for the effective coastal management of potential disasters in coastal areas. In order to estimate extreme wave heights in the Northwest Pacific Ocean, 100-year return period SWHs were calculated by applying a Peaks over Threshold (PoT) method to satellite altimeter SWH data from 1992 to 2016. Satellite altimeter SWH data were validated using in situ measurements from the Ieodo Ocean Research Station (IORS) south of Korea and the Donghae buoy of the Korea Meteorological Administration (KMA) off the eastern coast of Korea. The spatial distribution and seasonal variations of the estimated 100-year return period SWHs in the Northwest Pacific Ocean are presented. To quantitatively analyze the suitability of the PoT method in the Northwest Pacific, where typhoons frequently occur, the estimated 100-year return period SWHs were compared by classifying the regions as containing negligible or significant typhoon effects. Seasonal variations of extreme SWHs within the upper limit of 0.1% and the PoT-based extreme SWHs indicated the effect of typhoons on the high SWHs in the East China Sea and the southern part of the Northwest Pacific during summer and fall. In addition, this study discusses the limitations of satellite altimeter SWH data in the estimation of 100-year extreme SWHs. Introduction Tropical cyclones are accompanied by heavy rainfall, unusually high waves, and strong winds, all of which have a great impact on the coastal environment. The intensity of tropical cyclones, and the frequency of the most intense cyclones, has increased as an effect of climate change [1,2]. The intensity of tropical cyclones is also increasing locally as a result of changes in the cyclones' trajectories and in the location of their maximum intensity [3][4][5]. The extreme significant wave height (SWH) is increasing globally as well as regionally, especially in coastal regions [6][7][8][9][10]. Increases in extreme wave height caused by tropical cyclones and hazardous events, combined with predicted sea-level rises (e.g., [11][12][13]), have the potential to increase the magnitude of disasters along with coastal erosion. Therefore, it is very important to estimate the extreme wave height, such as the 100-year return period SWH, as well as to understand the wave variability over decades. It is important to develop sophisticated statistical techniques for estimating extreme wave heights and to understand their spatio-temporal distribution in the global ocean and regional seas. In addition, it is also important to study the applicability and limitations of the various estimation techniques that have been developed and utilized in previous studies. In particular, it is valuable to verify that long-term return period SWHs are adequately estimated in seas where extreme events such as typhoons frequently affect the estimation of extreme wave height. The Northwest Pacific has a variety of ocean and atmospheric phenomena that cause spatial and temporal variability of SWH, as shown in the long-term mean of satellite SWH data from 1992 to 2016 (Figure 1a,b). Previous studies have reported an increasing trend for SWH as well as for extreme SWH values corresponding to the upper 1% in the Northwest Pacific (e.g., [36]).
In addition, as shown in Figure 1c,d, the Northwest Pacific is a region with one of the highest frequencies of high-intensity tropical cyclones [37]. On average, more than 28 tropical cyclones occur per year [38], and the frequency of tropical cyclones accounts for 30% of the total global tropical cyclones [39]. Therefore, the Northwest Pacific is very suitable for the estimation of the 100-year return period SWH because of the relative abundance of extreme events such as tropical cyclones. Thus, it is important to understand the characteristics of extreme wave height in regions affected by tropical cyclones. Although the spatial distribution of the 100-year return period of SWH in the Northwest Pacific region has been presented [40,41], few studies have been conducted on the effect of tropical cyclones on extreme SWH. Method (IDM), Annual Maximum (AM) method, and Peaks over Threshold (PoT) method have been developed over the past few decades (e.g., [14][15][16][17]). Studies have estimated the extreme wave height in the global ocean and regional seas by applying these statistical methods to buoy measurements and shipborne wave recorder data [7,[18][19][20][21][22][23], satellite observation data [24][25][26][27][28][29][30], or model simulation data [31][32][33][34][35]. It is important to develop sophisticated statistical techniques for estimating extreme wave heights and to understand their spatio-temporal distribution in the global ocean and regional sea. In addition, it is also important to study the applicability and limitations of various estimation techniques that have been developed and utilized in previous studies. In particular, it is valuable to verify that long-term return period SWHs is adequately estimated in seas where extreme events such as typhoons frequently affect the estimation of extreme wave height. The Northwest Pacific has a variety of ocean and atmospheric phenomena that cause the spatial and temporal variability of SWH, as shown in the long-term mean of satellite SWH data from 1992 to 2016 (Figure 1a,b). Previous studies have reported an increasing trend for SHW as well as extreme SWH values corresponding to the upper 1% in the Northwest Pacific (e.g., [36]). In addition, as shown in Figure 1c,d, the Northwest Pacific is a region with one of the highest frequencies of high-intensity tropical cyclones [37]. On average, more than 28 tropical cyclones occur per year [38], and the frequency of tropical cyclones accounts for 30% of the total global tropical cyclones [39]. Therefore, the Northwest Pacific is very suitable for the estimation of the 100-year return period SWH because of the relative abundance of extreme events such as tropical cyclones. Thus, it is important to understand the characteristics of extreme wave height in regions affected by tropical cyclones. Although the spatial distribution of the 100-year return period of SWH in the Northwest Pacific region has been presented [40,41], few studies have been conducted on the effect of tropical cyclones on extreme SWH. The objective of this study is to investigate and verify the extreme wave heights using satellite altimeter-observed data in the Northwest Pacific, where typhoons and extratropical storms occur frequently. 
This is achieved by (1) validating the satellite altimeter data using in situ measurements; (2) applying the extreme value analysis (EVA) scheme commonly used to estimate the extreme wave heights; (3) comparing the estimated extreme wave heights with the maximum SWH measurements; and (4) investigating the difference in extreme wave height characteristics between the typhoon region and the non-typhoon region in the Northwest Pacific. This study also aims to discuss the limitations and precautions for estimating the extreme wave height using satellite altimeter data and to demonstrate the necessity of in situ data to verify the satellite-derived extreme wave heights. Satellite Data In this study, the altimeter SWH data provided by Institut Français de Recherche pour l'Exploitation de la Mer (IFREMER) from January 1992 to December 2016 (25 years) were utilized [42]. This database is composed of data from nine altimeters (European Remote Sensing-1 (ERS-1), Topography Experiment/Poseidon (TOPEX/Poseidon), European Remote Sensing-2 (ERS-2), Geosat Follow-On (GFO), Joint Altimetry Satellite Oceanography Network-1 (Jason-1), Environmental Satellite (Envisat), Joint Altimetry Satellite Oceanography Network-2 (Jason-2), Cryosat-2, and Satellite for Argos and Altika (SARAL)) over the study period, as shown in Table 1. The altimeter SWH data used in this study were quality-controlled along-track data. In addition, to improve accuracy and consistency, corrections of each altimeter's SWH data were also performed by Queffeulou and Croizé-Fillon [42] by comparing satellite data with in situ measurements or by intercomparison between altimeter data. In the Northwest Pacific, these altimeter SWH data were validated to be about 0.1 m in terms of bias and 0.3 m in terms of root-mean-square error (RMSE) [36]. To investigate whether satellite SWH data could be measured near a typhoon center under severe sea states, we used 10.8 µm channel infrared images of the Communication, Ocean, and Meteorological Satellite/Meteorological Imager (COMS/MI) for the period from 26 to 27 August 2012. During this period, several satellite altimeter tracks passed over typhoon Bolaven in the seas around the Korean Peninsula, and satellite SWHs could be obtained for the typhoon event. In-Situ Data The Ieodo Ocean Research Station (IORS), located at 125.18°E, 32.12°N in the East China Sea south of the Korean Peninsula, has been operating since 2003 [43]. It was constructed on an underwater rock with a depth of approximately 40 m. It is far from land or islands, approximately 149 km southwest of Marado, Korea, 276 km west of Dorisima, Japan, and 287 km from the nearest island of China, as shown in Figure 2b. The SWH measurements from IORS were used from 2005 to 2016 to evaluate the accuracy of the altimeter SWH data in a region with the greatest frequency of extreme conditions such as typhoons.
The tracks of satellite altimeters around IORS are presented in Figure 2c-k, which contains all the regular tracks and transitional tracks during the study period from 1992 to 2016. As IORS is a tower, the wave heights are observed using a radar instrument on a platform-based remote sensing system, in contrast to a conventional buoy, which uses an accelerometer to measure the wave height. The observed SWH data might contain abnormal values due to various atmospheric and marine conditions and instrumental errors. The Korea Hydrographic and Oceanographic Agency, which distributes the observation data from IORS, has developed a series of quality control procedures for wave height measurements based on the techniques presented by the Intergovernmental Oceanographic Commission [44,45] and Evans et al. [46]. The agency distributes the SWH data from IORS with quality control information. In this study, quality-controlled data at 1-h intervals with quality flags were used for analysis. In addition, the SWH measurements from the Donghae buoy of the Korea Meteorological Administration (KMA) were utilized. The buoy is located relatively far from the coast (129.95°E, 37.54°N), in an area with less influence from typhoons than IORS. The data were collected from 2011 to 2016. The SWH measurements at 1-h intervals from the Donghae buoy were also quality controlled by applying the quality flags provided with the data. As shown in Figure 3a, SWHs higher than 8 m were observed at IORS. The monthly averages show a mixture of the seasonal variation, in which the SWH increases in winter and decreases in summer, and the highest SWHs in August due to the frequent passage of typhoons (Figure 3c,e). The SWHs from the Donghae buoy show more pronounced seasonal variability than those of IORS (Figure 3d,f), but they also exhibit high values exceeding 4.5 m due to typhoons in August-September. Another difference from the SWH measurements from IORS is that there is a peak in April. This is related to rapidly developing low-pressure systems passing through the East Sea (Sea of Japan) [47,48].
Best Track Data of Typhoons Information regarding the occurrence and movement of typhoons from 1992 to 2016 in the study area was obtained from the International Best Track Archive for Climate Stewardship (IBTrACS) version 3 release 10 (v03r10) [49]. The number of typhoons during the study period was calculated in bins of 2° × 2° and used as an index to evaluate whether extreme conditions such as typhoons were sufficiently represented in the EVA results. Estimation of Extreme Value In order to understand the spatial distribution of the extreme wave height using the EVA in the Northwest Pacific, SWH data measured along satellite altimeter tracks were sub-sampled within bins of 2° × 2° [24,28,50]. In this study, the PoT method was utilized to estimate extreme SWHs using buoy measurements and satellite altimeter data (Figure 4). The PoT method, which uses data exceeding a defined threshold, alleviates the limitations of the AM method while retaining data that are independent and identically distributed. The data greater than the threshold follow a generalized Pareto distribution (GPD) in the EVA [17], F(x) = 1 − [1 + ξ(x − µ)/σ]^(−1/ξ), where µ is the location parameter (the threshold), σ is the scale parameter, and ξ is the shape parameter. As the threshold µ is a factor that affects the result of the EVA, it is of considerable importance to select an appropriate threshold value. Too low a threshold causes the estimated extreme value to have a low bias, while too high a threshold does not maintain stability because of excessive suppression of the amount of data [17,21]. Therefore, the stability of the parameters should be tested, and the parameters were estimated for every value of the threshold [17]. For the selection of the threshold, the shape parameter (ξ) and the modified scale parameter (σ* = σ − ξµ) are presented in Figure 5 with 95% confidence intervals for every value of the threshold; the red dot in that figure indicates the 99.5th percentile SWH. Based on the results of the stability test, the 99.5th percentile SWH suggested by Méndez et al. [21] was selected among the various threshold values proposed in previous studies [21,24,25,28].
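To make the threshold selection and GPD fitting step above concrete, the following base-R sketch picks the 99.5th percentile of a set of SWH values as the threshold and estimates the scale and shape parameters by maximum likelihood. It is only an illustration of the approach described in the text, not the authors' code: the simulated swh vector and all object names are assumptions, and a real analysis would use the declustered altimeter SWHs from one 2° × 2° bin.

# Illustrative PoT setup: select the 99.5th percentile threshold and fit a GPD
# to the exceedances by maximum likelihood (base R only).
set.seed(1)
swh <- rweibull(20000, shape = 1.6, scale = 2.0)   # placeholder for declustered SWHs (m)

threshold <- quantile(swh, 0.995)                  # location parameter mu (99.5th percentile)
excess    <- swh[swh > threshold] - threshold      # exceedances above the threshold

gpd_nll <- function(par, y) {                      # negative log-likelihood of the GPD
  sigma <- par[1]; xi <- par[2]
  if (sigma <= 0) return(Inf)
  if (abs(xi) < 1e-8) return(length(y) * log(sigma) + sum(y) / sigma)  # exponential limit
  z <- 1 + xi * y / sigma
  if (any(z <= 0)) return(Inf)
  length(y) * log(sigma) + (1 + 1 / xi) * sum(log(z))
}

fit <- optim(c(sigma = sd(excess), xi = 0.1), gpd_nll, y = excess, hessian = TRUE)
fit$par                                            # fitted scale (sigma) and shape (xi)

Repeating this fit over a grid of candidate thresholds and plotting the fitted shape parameter and the modified scale parameter σ* = σ − ξµ against the threshold would reproduce the kind of stability check shown in Figure 5.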
To perform the PoT analysis, the data should satisfy the precondition of independence between observations. However, a satellite altimeter is likely to observe similar SWH values along its track within a bin, which may violate this precondition. Taking this into account, we selected one maximum value for each satellite track within a given bin. Nevertheless, other satellites were likely to undermine this independence by observing similar SWHs within a given spatial range. As such, an additional constraint was applied, using a temporal range to select only one maximum value among the previously selected maxima. Previous studies have mentioned that the independence of the data can be ensured by separating the data into specific time intervals such as two days [35,51] or three days [21]. In this study, data separated at three-day time intervals were used for the EVA. In the PoT method, the probability level for the 100-year return period SWH is determined as P = 1 − N_Y/(100 × N_PoT), where N_PoT is the number of exceedances used in the PoT analysis and N_Y is the number of years covered by the analysis. As mentioned above, the PoT method complements the defects of the annual maximum method, but it also has a limitation in that it shows an extreme dependency on the threshold, and variations of the estimated extreme value are amplified if insufficient data are available for this approach.
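Continuing the sketch above (and reusing threshold, excess and fit from it), the 100-year return period SWH follows from the fitted GPD once the average number of exceedances per year, lambda = N_PoT/N_Y, is known; this is simply the GPD quantile at the probability level given in the text. The numbers below are placeholders, not values from the paper.

# Sketch: 100-year return level from the fitted GPD under the PoT formulation above.
# 'excess' should come from data declustered to one maximum per track, bin and
# three-day window, as described in the text.
n_pot   <- length(excess)          # N_PoT: number of exceedances used in the fit
n_years <- 25                      # N_Y: 1992-2016 covers 25 years
lambda  <- n_pot / n_years         # mean number of exceedances per year

return_level <- function(T_years, mu, sigma, xi, lambda) {
  # GPD quantile exceeded on average once every T_years
  mu + (sigma / xi) * ((lambda * T_years)^xi - 1)
}

swh_100 <- unname(return_level(100, mu = threshold,
                               sigma = fit$par["sigma"], xi = fit$par["xi"],
                               lambda = lambda))
swh_100                            # estimated 100-year return period SWH (m)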
Validation of Satellite SWH Data The altimeter SWH data were validated by comparison with the SWH measurements from the IORS and the Donghae buoy prior to the estimation of the extreme SWH value. Figure 6a,b present a comparison between the altimeter SWH data and the in situ measurements at the IORS and the Donghae buoy. When the collocation procedure between altimeter along-track SWH data and in situ measurements was performed using criteria of 30 min and 50 km in time and space, the numbers of matchup data produced at the IORS and the Donghae buoy were 833 and 556, respectively. The SWH data from the altimeters were in good agreement with the SWH measurements from both the IORS and the Donghae buoy. The comparison yielded RMSEs of 0.38 m at the IORS and 0.23 m at the Donghae buoy. Compared with the SWH measurements from IORS, there was a tendency for the altimeter SWH data to be slightly overestimated, with a positive bias error of 0.23 m. This tendency was found to be due to a mixture of errors caused by the characteristics of the IORS platform, which observes the wave height using a microwave instrument at a height of 35 m, in addition to satellite errors [52]. In contrast, the altimeter SWH data showed an insignificant bias of 0.01 m in comparison with the measurements from the Donghae buoy. The IORS and the Donghae buoy measure SWH data with a high temporal resolution of approximately 1 h at a point location, while the altimeter data are sparsely distributed in time with varying sampling intervals. Therefore, the SWH data may differ from each other depending on these observation characteristics. Considering these differences, monthly averaged values and maximum values of the SWHs were calculated for both the in situ measurements and the satellite data. In the case of the altimeter SWH data, the mean and maximum values were calculated using along-track data within a bin of 2° × 2° centered on the locations of the IORS and the Donghae buoy. Figure 6c-f shows the comparisons of the monthly means of wave heights from the in situ observation stations and the satellite observations. Overall, both the average and maximum SWHs observed from satellites and point stations were in good agreement for the entire period. Two large peaks of more than 10 m in the monthly maximum (Figure 6d) were generated by the typhoons Muifa and Bolaven, which passed over the IORS in August 2011 and August 2012, respectively. Similar values were measured by both the IORS and the altimeters. However, the maximum SWH obtained from the IORS showed a peak in August 2010 when typhoon Kompasu passed, while the maximum SWH from the altimeters did not show this feature. When comparing the monthly maximum SWHs observed from the Donghae buoy and the altimeters, some peaks measured at the Donghae buoy did not appear in the altimeter observations (Figure 6f). This suggests that the altimeters failed to observe the extreme waves generated by storms that occurred suddenly in the East Sea (Sea of Japan). This is related to the fast movement of typhoons and rapidly developing low-pressure systems [47,48].
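The monthly comparison described above can be sketched in a few lines: altimeter passes falling inside a 2° × 2° box centred on a station are grouped by calendar month, and the monthly mean and maximum are then compared with the station's own statistics. The data frame alt below is synthetic and purely illustrative; only the IORS coordinates are taken from the text.

# Sketch: monthly mean and maximum altimeter SWH inside a 2 deg x 2 deg box
# centred on a station, for comparison with in situ monthly statistics.
set.seed(2)
alt <- data.frame(
  time = as.POSIXct("2012-01-01", tz = "UTC") + runif(5000, 0, 365 * 86400),
  lon  = runif(5000, 123, 127),
  lat  = runif(5000, 30, 34),
  swh  = rweibull(5000, shape = 1.6, scale = 2.0)
)
station <- c(lon = 125.18, lat = 32.12)            # IORS position from the text

in_box <- abs(alt$lon - station["lon"]) <= 1 & abs(alt$lat - station["lat"]) <= 1
month  <- format(alt$time[in_box], "%Y-%m")

monthly_mean <- tapply(alt$swh[in_box], month, mean)
monthly_max  <- tapply(alt$swh[in_box], month, max)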
Estimation of Extreme SWH Using PoT Method Figure 7 shows the results of the extreme SWH estimates obtained by applying the PoT method with the GPD to the satellite-observed SWH data from the seas surrounding the in situ measurement stations, the IORS and the Donghae buoy. The small plot on the upper right side of each panel presents a comparison between the quantiles of the observed data and the quantiles of the PoT-estimated values of the altimeter-observed SWH data from the seas surrounding the IORS and the Donghae buoy. Since relatively high SWH data were selected by excluding the SWHs smaller than the threshold value, as shown by the red curves of Figure 7, the fitting of the GPD was expected to represent the characteristics of the distribution of extreme wave heights. The quantiles estimated using the determined probability density function (PDF) were in good agreement with the observed quantiles from the altimeter data near the IORS and the Donghae buoy. The estimated 25-year and 50-year return period SWHs were 11.83 m and 13.95 m, respectively, for the satellite SWH data near the IORS (Table 2). The 100-year return period SWH was estimated to be 16.49 m for the altimeter data near the IORS. The estimated extreme SWHs were higher than the maximum SWHs, with differences of 3.66 m and 0.20 m around the IORS and the Donghae buoy, respectively. Considering that the estimated 100-year return period SWHs were higher than the maximum observed values, it appears reasonable to estimate extreme SWHs by applying the PoT method in the study area. Figure 8 shows the estimated SWHs, marked as red lines, as a function of the return period from 1 year to 100 years using satellite data near the IORS and the Donghae buoy. The dashed lines represent the upper and lower limits of the estimated SWHs within the 95% confidence interval. The confidence interval was calculated using the variance-covariance matrix and the delta method, as described in [17]. In the sea near the IORS, the satellite-based extreme SWH at the 100-year return period was relatively large, at 16.49 m, with a confidence interval of approximately 5.04 m. In the region near the Donghae buoy, in the eastern coastal region of the Korean Peninsula with deep water, however, the 100-year return period SWH was 7.43 m, with the upper and lower limits of the estimated return level amounting to 7.94 m and 6.93 m.
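The delta-method confidence interval mentioned above can be approximated as follows, reusing fit, threshold, lambda, return_level and swh_100 from the earlier sketches: the variance-covariance matrix of the scale and shape parameters is taken from the inverse of the observed information returned by optim, and the gradient of the return level is computed numerically. A fuller treatment, as in [17], would also propagate the uncertainty in the exceedance rate; this simplified version is only meant to show the mechanics.

# Sketch: approximate 95% confidence interval for the 100-year return level
# via the delta method (uncertainty in lambda neglected for brevity).
V <- solve(fit$hessian)                              # var-cov matrix of (sigma, xi)

grad <- numeric(2); eps <- 1e-5                      # numerical gradient of the return level
for (i in 1:2) {
  up <- fit$par; up[i] <- up[i] + eps
  dn <- fit$par; dn[i] <- dn[i] - eps
  grad[i] <- (return_level(100, threshold, up["sigma"], up["xi"], lambda) -
              return_level(100, threshold, dn["sigma"], dn["xi"], lambda)) / (2 * eps)
}

se_100 <- sqrt(as.numeric(t(grad) %*% V %*% grad))   # standard error of the return level
c(lower = swh_100 - 1.96 * se_100, upper = swh_100 + 1.96 * se_100)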
Spatial Distribution of Extreme Significant Wave Heights from Altimeter Data To understand the spatial distribution of extreme SWHs in the Northwest Pacific, the 100-year return period SWHs within bins of 2° × 2° were estimated using the PoT method (Figure 9). Seasonal Variability of Mean and Maximum Significant Wave Heights To understand the differences between the 100-year extreme SWHs and the mean SWHs over the past decades, we investigated the monthly distributions of SWHs for the period of 1992 to 2016, which are shown in Figure 10. In January, the SWHs were still high but began to decrease to less than 3 m in the northeastern region. In the spring, from March to May, the SWHs were remarkably reduced, to less than 1 m in the marginal seas. This tendency of relatively small SWHs (<1 m) in the marginal seas lasted through the summer, from June to August. Therefore, the spatial distribution of the 100-year return period SWHs from the PoT analysis in Figure 9 can be asserted to reflect the characteristics of the extreme conditions in the northeastern part of the study area in winter. Considering the relatively small SWHs in the East China Sea in summer, the extreme SWHs in this region originate from the activity of typhoons rather than from ordinary summer SWHs. As mentioned previously, the mean SWHs were relatively small, at less than 1.5 m, in the East China Sea in summer. This yielded large differences from the 100-year return period SWHs in this region. Accordingly, the seasonal variations of extreme SWHs within the upper 0.1% were examined, as shown in Figure 11. Similar to the seasonal mean of the SWHs, the extreme SWHs also showed the highest values, of up to 14 m, in the eastern part in winter, from December to February (Figure 11d). One of the most remarkable differences between the means of all SWHs (Figure 10) and the extreme SWHs within the upper 0.1% (Figure 11) was found in summer and fall. There were high values of extreme SWHs amounting to 8-10 m in the East China Sea and south of Japan in the Northwest Pacific. This implies that some high SWHs occur seldom but appear with extremely high wave heights in summer and fall, suggesting a potential role for tropical storms in these regions during summer and fall.
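As a minimal illustration of the gridded statistics discussed above, the sketch below assigns synthetic along-track records to 2° × 2° bins and a season, and extracts an "upper 0.1%" summary per bin (taken here as the 99.9th percentile, one possible reading of the text). The per-bin 100-year values mapped in Figure 9 would instead be obtained by applying the PoT fit from the earlier sketches to each bin's data. The alt data frame is reused from the monthly-comparison sketch and is synthetic.

# Sketch: 2 deg x 2 deg binning and a per-bin seasonal "upper 0.1%" SWH summary.
lon_bin <- 2 * floor(alt$lon / 2)                    # western edge of each 2-degree bin
lat_bin <- 2 * floor(alt$lat / 2)
bin_id  <- paste(lon_bin, lat_bin, sep = "_")

season_of <- c("winter", "winter", "spring", "spring", "spring", "summer",
               "summer", "summer", "fall", "fall", "fall", "winter")
season <- season_of[as.integer(format(alt$time, "%m"))]

is_summer <- season == "summer"
summer_extreme <- tapply(alt$swh[is_summer], bin_id[is_summer],
                         quantile, probs = 0.999)    # per-bin upper 0.1% SWH in summer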
Seasonal Variation of 100-Year Return Significant Wave Heights A relatively weak seasonal variation of the estimated extreme 100-year return SWH from the PoT analysis was found in the Northwest Pacific (Figure 12). The spatial average of the 100-year return period SWH from the PoT analysis was estimated to be about 10 m in winter. In contrast, it was calculated to be about 8.3 m in summer, which was approximately 1 m higher than the maximum values of the upper 0.1% of SWHs (Figure 11). The overall results of the PoT analysis showed a spatial pattern similar to that of the upper 0.1% of SWHs, but due to the limited data and high spatial variability, some abnormal values can be seen in the study area. In winter and spring, extreme SWHs tended to be high at relatively high latitudes (>40°N). As described earlier, the highest extreme SWH region (>10 m) in the East China Sea during summer (Figure 12b) seemed to be due to the effect of typhoons (Figure 1c,d). The effects of extreme conditions and latitudinal tendencies were also detected in the fall (Figure 12c). Effect of Tropical Cyclones on the Estimation of Extreme SWH In the previous sections, it was hypothesized that the high PoT-based SWHs in summer were associated with typhoons. To clarify whether these are indeed related to typhoons, we classified all the spatial grids into two regions, typhoon and non-typhoon, by applying a limit to the number of typhoons (N = 10) in each bin. Figure 13a shows the histogram of the differences between the PoT-derived extreme SWHs and the maximum SWH in the typhoon region with a frequency of greater than 10, corresponding to a cumulative percentage of typhoon passage frequency of approximately 60%, as shown in Figure 1c,d. The differences between the 100-year return SWH obtained from the PoT analysis and the maximum SWHs showed positive values in nearly all regions (>99.5%) regardless of the number of typhoons, which indicated that the characteristics of extreme conditions were appropriately reflected in the EVA (Figure 13a,d). The maximum count was found at a difference of approximately 4 m, which was similar to that of the non-typhoon regions, as shown in Figure 13d.
Although the maximum frequency appeared at a similar difference of SWHs in both the typhoon and non-typhoon regions, the months in which high SWHs within the upper 0.1% occurred most frequently differed: they appeared in August to October in the case of the typhoon regions (Figure 13b) and in December to February in the case of the non-typhoon regions (Figure 13e). In total, 56.6% of the typhoon regions presented the maximum SWH between August and October, whereas 67.5% of the non-typhoon regions showed the maximum value in winter. The PoT-derived SWHs showed a high frequency in fall (September to November) in the typhoon region, while the maximum frequency appeared in winter (December to February) in the non-typhoon region, as shown in Figure 13c. The distributions of SWHs showed considerable differences between the typhoon region and the non-typhoon region. In order to investigate the characteristics of the SWH distributions in the typhoon and non-typhoon regions in more detail, two points representing the typhoon (128°E, 30°N) and non-typhoon (174°E, 52°N) regions were selected, and the PDFs were determined by PoT analysis (Figure 14). At the selected point representing the typhoon region (128°E, 30°N), about 30 typhoons passed over the study period. As expected, the quantiles estimated by the PoT analysis showed relatively good agreement with the observed quantiles (Figure 14a). In addition, as shown in Figure 14b, the PoT analysis estimated appropriate extreme SWHs at the point representing the non-typhoon region. With the accumulation of satellite-observed SWH data over several decades, it can also be suggested that the PoT analysis can be used to estimate more reliable extreme SWHs in the Northwest Pacific.
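The typhoon/non-typhoon comparison described in this section reduces to a simple per-bin classification, sketched below with a synthetic per-bin summary table; the threshold of 10 typhoon passages is the one stated in the text, while the columns and values of grid_summary are invented for illustration.

# Sketch: classify 2 deg x 2 deg bins by typhoon passage count (from IBTrACS) and compare
# the PoT-derived 100-year SWH with the observed maximum SWH in each bin.
set.seed(3)
grid_summary <- data.frame(
  lon        = rep(seq(120, 178, by = 2), times = 15),
  lat        = rep(seq(20, 48, by = 2), each = 30),
  n_typhoons = rpois(450, lambda = 8),               # typhoon passages per bin (synthetic)
  swh_100yr  = runif(450, 6, 17),                    # PoT-derived 100-year SWH (synthetic)
  swh_max    = runif(450, 5, 13)                     # maximum observed SWH (synthetic)
)

grid_summary$region <- ifelse(grid_summary$n_typhoons > 10, "typhoon", "non-typhoon")
grid_summary$diff   <- grid_summary$swh_100yr - grid_summary$swh_max

tapply(grid_summary$diff, grid_summary$region, summary)  # difference distributions by region
mean(grid_summary$diff > 0)                              # share of bins where PoT > maximum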
Discussion Although extreme SWHs from the PoT analysis are evaluated reliably in the Northwest Pacific, where typhoons are frequently generated, this method may also have some limitations. Figure 15 shows the 11 µm channel images of COMS/MI from 26 to 27 August 2012, when typhoon Bolaven transected the Korean Peninsula. In Figure 15, all the altimeter data along the tracks of SWH observations were within 3 h of the COMS/MI observation time. Typhoon Bolaven passed near the IORS at 15 UTC on 27 August 2012, and an enormous SWH with a peak value of 11.1 m was measured by the instrument at the IORS (Figure 3). At nearly the same time, an altimeter observed an SWH of about 12 m while passing through the center of the typhoon (Figure 15g). As may be observed from the time series of the altimeter observations in Figure 15, the altimeters did not obtain data for the entirety of the storm, although all satellite altimeter data from Jason-1/2 and Cryosat-2 were collected. Therefore, it is highly plausible that extreme SWHs may be underestimated in regions in which the altimeter is unable to observe the high SWHs generated by storms. The PoT analysis is reported to be sensitive to the observational sampling limits of the altimeter [28]. In order to calculate the extreme wave heights with high confidence, the duration of the wave height data should be sufficiently long. Especially in the Northwest Pacific, where typhoons occur irregularly in space and time, satellite data should be accumulated over a longer period in order to include the effects of typhoons.
Both buoys and altimeters have limitations in the measurement of extreme waves, because buoys measure instantaneous extreme waves at a single point while satellite altimeters measure a mean value over a footprint. Thus, a difference in the resulting measurements will inevitably appear. Despite the differences between these observations, field observations of wave heights are very important and provide invaluable verification data. In this regard, observational data from the IORS in the center of the East China Sea can provide very important clues regarding wave height changes, as well as other oceanic and atmospheric variables, when typhoons or extreme events occur. In light of this, if sufficient data are accumulated, the IORS can be one of the most important sites for the validation of extreme SWHs using satellite data and for diverse kinds of oceanic research. Thus, more buoy stations for in situ measurements should be installed and operated in near-real time for the high-performance estimation of 100-year extreme SWHs. Conclusions The 100-year return period SWHs were estimated by applying the PoT method as one of the representative EVA methods. The PoT-derived SWHs were compared with the maximum SWHs within the upper 0.1% of satellite observations in the Northwest Pacific. Despite many shortcomings, including the limitations of the PoT method and the unevenness of observations in satellite altimeter data, the PoT method supported our hypothesis by presenting higher SWHs than the maximum values observed from 1992 to 2016. The comparisons of the data were performed by classifying the extreme SWHs into two different regions (a typhoon region and a non-typhoon region) by defining thresholds for the frequency of typhoons in each bin.
As a result, the PoT-derived 100-year extreme SWHs revealed characteristic variations affected not only by typhoons in the East China Sea during the typhoon periods in summer and fall but also by winter-time high SWHs in the northeastern part of the study region in the Northwest Pacific. Overall, the differences in SWHs between the PoT-derived extremes and the maximum values appeared at approximately 4 m, and the highest differences were approximately 8 m. The present PoT method represents the SWHs of extreme events well. However, there is a potential limitation in terms of bias due to sparse sampling, as an altimeter observes the SWH only along its track in the nadir direction. Nonetheless, satellite altimeter data are very valuable for estimating extreme values of SWHs as they cover the global ocean as well as regional seas. In addition, if sufficient data are accumulated, it is also expected that the IORS could be a good candidate site for investigating oceanic responses to the typhoons that frequently pass over the station. Data Availability Statement: All data used in this study are available from IFREMER (satellite altimeter SWH data, ftp://ftp.ifremer.fr/ifremer/cersat/products/swath/altimeters/waves/ (accessed on 20 January 2021)), KMA (buoy and COMS data, https://nmsc.kma.go.kr/ (accessed on 20 January 2021)), or KHOA (IORS data, http://www.khoa.go.kr/ (accessed on 20 January 2021)). Acknowledgments: In situ data from the Ieodo Ocean Research Station (IORS) were provided by the IORS project of the Korea Hydrographic and Oceanographic Agency, Korea. Conflicts of Interest: The authors declare no conflict of interest.
Tumor purity adjusted beta values improve biological interpretability of high-dimensional DNA methylation data A common issue affecting DNA methylation analysis in tumor tissue is the presence of a substantial amount of non-tumor methylation signal derived from the surrounding microenvironment. Although approaches for quantifying and correcting for the infiltration component have been proposed previously, we believe these have not fully addressed the issue in a comprehensive and universally applicable way. We present a multi-population framework for adjusting DNA methylation beta values on the Illumina 450/850K platform using generic purity estimates to account for non-tumor signal. Our approach also provides an indirect estimate of the aggregate methylation state of the surrounding normal tissue. Using whole exome sequencing-derived purity estimates and Illumina 450K methylation array data generated by The Cancer Genome Atlas project (TCGA), we provide a demonstration of this framework in breast cancer, illustrating the effect of beta correction on the aggregate methylation beta value distribution, clustering accuracy, and global methylation profiles. Introduction Epigenetic alterations in tumor cells are a hallmark of cancer, and changes in the DNA methylome represent the first, and to date the best characterized, example of a bona fide epigenetic mechanism for altering gene regulation in cancer. Understanding epigenetic alterations and their effect on tumorigenesis has therefore long been a research focus in cancer, and numerous attempts at capturing epigenotypes of cancer have been made over the years [1][2][3][4][5][6]. Carcinogenesis is, however, a complex process involving both epigenetic and genetic insults to the human genome, as well as an intricate interplay between the tumor compartment, the surrounding normal tissue, and cells of the immune system. Consequently, resected bulk tumor samples are not completely pure with respect to tumor cells, but instead consist of a complex mixture of cell types representative of the tumor and its microenvironment. This mixture of malignant and different non-malignant cell types has profound implications for the readout of, e.g., high-dimensional genomic profiling techniques. For instance, for somatic mutations occurring in tumor cells, the observed variant allele frequency, i.e., the number of sequence reads with a variant divided by the total number of reads covering a specific locus, becomes a function of the proportions of malignant and non-malignant cells. As different cell types also have different epigenetic states, it follows that this admixture of cell types also creates a mixture of epigenetic states in the analyzed tissue, making the discovery and characterization of pure epigenetic subtypes of a given cancer type a challenging task. To circumvent the issue of mixed cell types in bulk tumor specimens and peripheral blood, different approaches have been proposed to deconvolve RNA-sequencing [7][8][9] or global DNA methylation data [10][11][12][13][14] to provide estimates of the magnitude and/or nature of the tumor and non-tumor compartments. These methods have often focused on quantifying the immune component of the tumor compartment and relied on flow-sorted gene expression or DNA methylation data generated from purified blood cell types as the basis for deconvolution.
Another category of methods has instead focused on quantifying the purity of tumor samples from DNA/RNA-sequencing or DNA methylation data without trying to delineate cell types within the non-tumor compartment. Methods for deriving purity estimates from DNA methylation data have been reported by several groups, e.g., [15][16][17][18]. With respect to the performance of these methods for estimating sample purity, sequencing-based estimates have generally constituted the gold standard measure of comparison in the respective studies. Although most developed and used tools for DNA methylation deconvolution and/or tumor purity analysis have been aimed at quantifying tumor purity or the admixture of infiltrating cell types, few have been aimed at correcting for it globally, but have rather proposed using purity as a covariate in group comparisons and/or clustering [17,[19][20][21][22]. In the case of tools such as those available in the InfiniumPurify package, which includes a number of approaches directed at controlling for purity in epigenomic analyses, these methods frequently make use of matched reference normal samples for establishing the baseline methylation state of the non-tumor compartment and are currently only implemented on the Illumina Infinium 450K platform [19]. The requirement of available "normal" baseline data is not always simple to meet, and although the Infinium 450K remains the historically most used methylation profiling platform, it has subsequently been replaced by the EPIC 850K array [23]. Accounting for confounding signal derived from the tumor microenvironment (TME) is conceptually straightforward, and a theoretical perfect separation is obtainable at CpGs in which the tumor and non-tumor compartments respectively show homogeneous and diametrically opposed DNA methylation states. Indeed, this assumption forms the basis for most proposed correction and estimation algorithms, and logic dictates that it is only at sites where methylation differs between the tumor and non-tumor compartments that deconvolution strategies can separate alleles contributed by the respective components of the aggregate sample. For the purpose of estimating overall tumor purity, this assumption typically only needs to be met for a small minority of the hundreds of thousands of assayed CpG sites in order for robust estimates to be made. But in order to correct for the TME influence on tumor methylation estimates, one needs to account for the fact that only a minority of tumor cells alter their methylation states in relation to the normal tissue background. This invalidates a simple assumption of one tumor and one normal compartment with diametrically opposed methylation states, which, e.g., the InfiniumPurify function implicitly makes [19]. In this study, we apply a strategy based on the use of an estimated tumor cell content, flexible mixture modelling, and linear regression to correct high-dimensional DNA methylation data at an individual CpG level. The direct basis for this work stems from the observation that CpGs linked to silencing of the well-known BRCA1 tumor suppressor gene display a two-population CpG methylation pattern in triple negative breast cancer (TNBC, tumors that are negative for estrogen and progesterone receptor expression and lack amplification of the human epidermal growth factor receptor 2/erythroblastic oncogene B, HER2/ERBB2, gene) that is highly correlated with tumor cell content estimated from whole genome sequencing [24].
While BRCA1 promoter hypermethylation is frequent in TNBC [24], it is well established that somatic hypermethylation underlying, e.g., RB1 gene hypermethylation in retinoblastoma [25] or the CIMP phenotypes of colorectal cancer [1] or glioblastoma multiforme [2] only affects a subset of all tumors of a given cancer type. Thus, when a normal-tumor DNA methylation difference exists, only a minority of tumors display a methylation state that differs from that of the non-tumor compartment. Here, we believe that accounting and correcting for this fact, using modelling on an individual CpG level and allowing for the presence of multiple tumor methylation states, results in overall more binary methylation profiles. Such binary profiles may enhance the clustering performance of methylation data and the characterization of novel tumor suppressor gene loci. As a result of the investigations of global DNA methylation data from breast cancer in this study, we also propose that the reliance on normal samples for deconvolution may be overstated, as reliable estimates of normal microenvironment methylation states can be obtained from bulk methylation data. We present a proof-of-concept analysis in the TCGA breast cancer (BRCA) cohort, showing that our approach improves the biologically relevant Basal vs non-Basal contrast, results in more binary and thus cleaner methylation profiles in clustering analyses, and increases the overall biologically relevant contrast in global and focused analyses. We expect that our observations and approach can be developed further and/or integrated into pre-existing software solutions to improve their capacity for denoising bulk methylation data and to allow for more unbiased epigenomic analyses that may yield novel insights into epigenetic phenotypes of cancer and new leads in the search for tumor suppressor genes frequently inactivated by DNA methylation. Data sets We obtained BRCA1 gene-associated CpG methylation data based on Illumina EPIC arrays from Glodzik et al. for 235 TNBC cases, together with corresponding BRCA1 gene pyrosequencing data and tumor purity estimates from WGS [24,26]. Using the GDC data portal (https://portal.gdc.cancer.gov), we obtained data manifests covering The Cancer Genome Atlas (TCGA) consortium breast cancer (BRCA) cohort with data available on the levels of RNA-sequencing, whole exome sequencing (WES), 450K methylation, as well as copy-number and LOH status. For raw Illumina 450K idat files, we relied on the GDC legacy portal. Data were downloaded using the Genomic Data Commons (GDC) data transfer tool. In a first step, platform overlaps were used to filter samples with incomplete data on all levels. In addition, samples were filtered against a previously published data blacklist, were required to have available purity estimates [27], and had to pass internal quality checks with respect to 450K array normalization. Additionally, PAM50 [28] molecular subtype calls for each sample were obtained from Thorsson et al. [29]. The final cohort consisted of 630 female breast cancer samples. Additionally, custom CpG annotations were derived for the Illumina 450K platform by mapping CpG coordinates to TCGA RNA-seq gene models, CpG density measures as defined by Saxonov et al. [30] and Weber et al. [31], TCGA BRCA ATAC-seq peak overlaps [32], RepeatMasker overlaps [33], ENCODE candidate cis-regulatory elements [34], and ENCODE ChIP-seq peak overlaps for 340 transcription factors in 129 cell lines [35].
The complete TCGA data-generating workflow is available on github under the following link (https://github.com/StaafLab/tcgaBrca). DNA methylation data preprocessing Raw idat files were processed using the function preprocessFunnorm [36] as implemented in the R-package minfi [37]. Default parameters were used, with patient gender estimated using the built-in function getSex in minfi. Infinium 450K probes annotated as poor performing by Zhou et al. [38] were filtered out, leaving 421 368 unique probes (freeze 2018-09-09). Additionally, a platform-related effect on CpG methylation beta values between the two utilized probe chemistries was adjusted for using the method of Holm et al. [4]. As a normal reference data set, we used GSE67919 [39], downloaded from the Gene Expression Omnibus (GEO) and corrected for the Infinium I/II effect in analyses related to our correction method. For use with the "InfiniumPurify" method (below), we did not perform Infinium I/II correction on normal samples, as this step influenced the performance of the method negatively. A strategy to model and correct CpG methylation with respect to tumor purity Based on the observation that DNA methylation and tumor purity often interact to produce mixed methylation states that can be modelled by one or more linear functions, we devised an algorithm in the R programming environment [40] that corrects for this effect and produces purified tumor as well as inferred "normal" methylation profiles for a cohort of samples to be corrected. The first objective of the algorithm is to identify, on a per-CpG basis, the presence of up to three natural populations of samples that arise as a product of the admixture of tumor and non-tumor cells, and then to use the discovered populations in a second step to derive estimates of the purified tumor and normal background methylation states. Briefly, the algorithm uses the DNA methylation of a single CpG at a time for a given set of samples, as well as a matched global purity estimate for each sample, as input. A small amount of Gaussian noise N(0, 0.005) is added to each sample's methylation estimate to counteract the effect of zero standard deviation (SD) populations on modeling. The function "stepFlexmix" in the FlexMix package for model-based clustering [41] is then applied with methylation as the dependent and 1-purity as the independent variable and with parameters K = 1-3 and nrep = 3. A best-fit model is chosen using the function "getModel" with the Bayesian Information Criterion (BIC) as the optimization criterion, and the clusters (N = 1-3) are obtained using the function "clusters". Briefly, in this step FlexMix uses the expectation-maximization (EM) algorithm to iteratively find the optimum assignments of observations (samples) into different numbers of populations (classes) given the parameters and optimization criteria provided. Next, a linear regression model is fitted to each identified population using the original methylation estimate as the dependent and 1-purity as the independent variable to obtain a linear fit with the intercept serving as the pure tumor methylation state. To obtain the normal background methylation state for each population, the same regression is performed with purity as the independent variable. The final methylation state is in each case formed by adding the residuals of each fit to the obtained intercept(s). In a final step, values <0 and >1 are set to zero and one, respectively.
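A compact sketch of the per-CpG procedure just described is given below, assuming the flexmix package; it follows the text (jittered betas for population discovery, per-population regressions on 1-purity and on purity, intercept plus residuals, clamping to [0, 1]) but is a simplified illustration rather than the authors' implementation, which is available at https://github.com/StaafLab/adjustBetas. All function and variable names here are our own.

# Sketch: purity adjustment of a single CpG following the description above.
library(flexmix)

adjust_one_cpg <- function(beta, purity, seed = 1) {
  set.seed(seed)
  d <- data.frame(
    beta_obs = beta,
    beta_jit = beta + rnorm(length(beta), mean = 0, sd = 0.005),  # small Gaussian jitter
    imp      = 1 - purity,                                        # 1 - purity
    pur      = purity
  )

  # Discover up to three sample populations; keep the BIC-best model
  fits <- stepFlexmix(beta_jit ~ imp, data = d, k = 1:3, nrep = 3, verbose = FALSE)
  pop  <- clusters(getModel(fits, which = "BIC"))

  tumor <- normal <- numeric(length(beta))
  for (k in unique(pop)) {
    idx   <- pop == k
    fit_t <- lm(beta_obs ~ imp, data = d[idx, ])   # intercept = pure tumor methylation
    fit_n <- lm(beta_obs ~ pur, data = d[idx, ])   # intercept = normal background methylation
    tumor[idx]  <- coef(fit_t)[1] + resid(fit_t)
    normal[idx] <- coef(fit_n)[1] + resid(fit_n)
  }
  list(tumor  = pmin(pmax(tumor, 0), 1),           # clamp adjusted betas to [0, 1]
       normal = pmin(pmax(normal, 0), 1))
}

For a cohort, calling this function on one row of the beta matrix at a time, with the matched purity vector, yields the adjusted tumor and inferred normal estimates for that CpG.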
The individually adjusted CpG methylation estimates for each sample are then aggregated to reform a data matrix of the same dimensions as the input beta matrix (CpGs as rows, samples as columns). For improved run speed, the function is implemented using the R-package "parallel", so that calculations for separate CpGs are distributed across a user-specified number of available cores. To produce deterministic results, each individual CpG is also paired with a unique RNG seed number that makes each parallelized operation fully reproducible. Scripts required to perform beta adjustment are available on github under the following link (https://github.com/StaafLab/adjustBetas). InfiniumPurify 450K methylation analysis The InfiniumPurify R package [19] contains functions for both the estimation of tumor purity (function getPurity) and correction of beta values (function InfiniumPurify). The correction function requires a tumor set as well as a normal data set (both N>20) in combination with a purity estimate for each tumor sample. For all runs we used the 630 TCGA BRCA samples processed as described above. Normal methylation profiles were obtained from GSE67919 and were not corrected for Infinium I/II effects when used with InfiniumPurify. The DNA sequencing-based purity variable was obtained from [27]. Modeling DNA methylation as a function of tumor purity We previously analyzed genome-wide DNA methylation patterns in the context of BRCA1-mutated vs hypermethylated triple negative breast cancer (TNBC) [24]. As part of this study, we noted anecdotally that CpG loci display different methylation characteristics with respect to tumor purity. An example of one of these patterns is illustrated by the methylation state of a CpG in the BRCA1 promoter region in relation to tumor fraction estimates derived from WGS (Fig 1A). The observed methylation pattern across the 235 TNBC samples shows the presence of two distinct populations with diametrically opposed methylation states when accounting for tumor purity, of which one (the BRCA1 hypermethylated cases by pyrosequencing, red points, Fig 1A) appears to be linearly correlated with tumor content. Conceptually, if such populations of samples can be identified, standard linear regression may then be used to model the influence of normal cell content on methylation levels, and to "correct" somatic methylation for normal cell influence. Interestingly, the two observed populations of tumors (hypermethylated and non-methylated) in Fig 1A appear well described by a combination of linear functions and are well captured using an unsupervised flexible mixture modeling approach that allows for the detection of up to three sample populations (Fig 1B). Although all three populations in Fig 1B would result in equivalent final estimates on the sample level, population 3 (green) can be regarded as redundant and is the result of the mixture modeling algorithm being overly sensitive to minor differences in the data variance structure. This effect can be mitigated by, e.g., adding a small component of normally distributed noise to the methylation estimate, yielding two rather than three populations given the same input variables (Fig 1C). By using the intercept terms of the respective linear regression models for the two populations, with purity and 1-purity as the independent variables, an estimate of the methylation state of both the "normal background" (Fig 1D and 1E) and the pure tumor can be obtained (Fig 1F and 1G) using the equation of the straight line.
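The two-population behavior just described can be reproduced on synthetic data with the adjust_one_cpg sketch from above; the simulation below (a hypermethylated and an unmethylated subgroup whose bulk betas scale with purity) is entirely invented and only meant to illustrate the de-trending and the recovery of the normal background.

# Sketch: a toy two-population CpG resembling the pattern in Fig 1A, corrected with
# the adjust_one_cpg sketch defined earlier.
set.seed(10)
purity <- runif(235, 0.2, 0.9)                      # synthetic tumor purity for 235 samples
hyper  <- rbinom(235, 1, 0.4) == 1                  # a hypermethylated subgroup
beta   <- ifelse(hyper, purity * 0.85, 0.05) + rnorm(235, 0, 0.03)
beta   <- pmin(pmax(beta, 0), 1)                    # observed (bulk) beta values

adj <- adjust_one_cpg(beta, purity)
tapply(adj$tumor, hyper, mean)                      # adjusted betas separate the two groups
mean(adj$normal)                                    # inferred normal background (low here)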
By then de-trending the respective population beta estimates with respect to 1 − purity, a markedly improved separation can be observed between the two tumor populations (Fig 1G). We similarly note that concordant estimates of the methylation state of the aggregate tumor microenvironment (presumably normal breast tissue) are obtained when modeling methylation as a function of tumor purity (Fig 1E).

A multi-population approach for correcting DNA methylation beta values using tumor purity estimates

Based on the observation that the tumor compartment can display a methylation pattern in which only a subset of tumors diverges from the presumed somatic methylation state, we set out to define a framework for correcting DNA methylation beta values that allows for the presence of multiple separate methylation states in the tumor compartment, as well as for estimation of the aggregate methylation state of the non-tumor compartment, without requiring a priori information about the normal tissue methylation state or paired normal data. For this we settled on an unsupervised approach using mixture modeling, in which the FlexMix framework is applied at the level of individual CpGs for automated discovery of populations of samples in a cohort with similar underlying methylation states confounded by non-tumor methylation, using a "purity" estimate as a regression covariate. Our framework is agnostic to how the tumor purity estimate for a sample is obtained. Thus, estimates may be obtained from analysis of genetic data by WES, WGS or SNP microarrays, from epigenetic data (e.g., DNA methylation arrays), or from pathology estimates. The framework does, however, intuitively require a subset of cases in a sample cohort to diverge from the presumed somatic methylation state for the population identification to be robust. Based on empirical assessment, and to limit runtimes, we chose a maximum of three populations to model in each iteration (CpG). The FlexMix output is parsed to obtain population designations for each sample, and line fits including slopes and intercept terms are obtained for the N discovered populations. If fewer than two populations are defined, correction for tumor purity is performed by treating all samples as a single population, similar to the method implemented in the InfiniumPurify R-package [19]. To improve runtimes, we implemented our method using a parallel computing approach, and each CpG is paired with a unique seed number to ensure reproducibility.

Application of the multi-population method to random CpGs enhances the Basal-Luminal distinction in breast cancer DNA methylation data

To evaluate the performance of purity-corrected methylation data, we extracted 100 data sets of 500 random CpGs each (with beta variance >0) from the Illumina 450K platform for 630 breast cancers from the TCGA consortium and performed agglomerative hierarchical clustering (Pearson distance, Ward's linkage method) on unadjusted and adjusted data, respectively (Fig 2A). We evaluated the performance of beta correction by testing how well the top-level split captured the RNA-sequencing-derived PAM50 Basal vs. Luminal (non-Basal) tumor assignments. For comparison, we also performed the same clustering using dichotomized beta values (beta >0.3), as described in Hoadley et al. 2018 [27], and beta values adjusted using the "InfiniumPurify" function as implemented in the eponymous R-package [20].
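One iteration of the evaluation just described (500 random variable CpGs, Pearson distance, Ward's linkage, two-group cut compared against the PAM50 Basal vs. Luminal labels) might be sketched as below; beta_mat (CpGs by samples) and is_basal are placeholders, and ward.D2 is our choice, as the text does not state which Ward variant was used.

    evaluate_split <- function(beta_mat, is_basal, n_cpg = 500) {
      idx <- sample(which(apply(beta_mat, 1, var) > 0), n_cpg)
      d   <- as.dist(1 - cor(beta_mat[idx, ]))   # Pearson distance between samples
      hc  <- hclust(d, method = "ward.D2")       # Ward's linkage
      grp <- cutree(hc, k = 2)                   # top-level two-group split
      # accuracy of the split against the Basal vs. Luminal assignment
      max(mean((grp == 1) == is_basal), mean((grp == 2) == is_basal))
    }

Repeating this over 100 random draws on unadjusted, dichotomized, InfiniumPurify-adjusted, and purity-adjusted matrices gives the kind of comparison reported below.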
Compared to unadjusted beta values, our approach increased discrimination of the Basal vs. Luminal split in terms of specificity and accuracy, while beta dichotomization did not improve the discriminatory power, either relative to unadjusted data or in absolute terms (Fig 2B and 2C). Using InfiniumPurify yielded better results than dichotomization for most assayed metrics but did not outperform our proposed method.

Deriving corrected beta values and an aggregate estimate of the methylation state of the non-tumor compartment from bulk breast cancer data

To more comprehensively evaluate our approach, we applied our method to the top 5000 most varying CpGs across the Illumina 450K platform for the 630 TCGA breast cancer cases. We first evaluated the beta correction using heatmap visualization of uncorrected beta values (Fig 3A, middle panel), inferred normal beta values (Fig 3A, left), and adjusted beta values (Fig 3A, right), using sample and row ordering based on agglomerative clustering of corrected beta values and selecting a five-group split of samples as the unit of comparison. We also evaluated the aggregate beta distribution in uncorrected and corrected data, respectively, and found that adjusted beta values were shifted towards the extremes of the beta distribution in comparison with unadjusted beta values, consistent with the conception of DNA methylation as a binary state (Fig 3B). To evaluate the performance of our method for inferring the normal methylation state from analysis of bulk tumor specimens, we compared the inferred normal state data from the 630 TCGA breast cancer samples against a public data set of 81 healthy breast tissue samples profiled on the same Illumina methylation platform (GSE67919) [39]. Plotting the mean inferred beta values in our cohort against the mean observed beta values in the normal breast samples for all CpGs (5000 most varying) showed good overall agreement (Pearson's r = 0.93, p < 0.001, Fig 3C). We also calculated the correlation coefficients between each individual inferred normal sample and the average methylation state of the same normal breast tissue CpGs and contrasted these with correlations calculated from the corresponding tumor estimates before and after purity adjustment (S1 Fig). This analysis showed a similar performance on the level of individual inferred normal samples (median r = 0.86) as that observed in aggregate. More importantly, it demonstrated a low correlation between normal breast and adjusted tumor methylation states across the top 5000 most varying CpGs in our cohort (median r = −0.04). Moreover, calculating the individual correlation coefficients between unadjusted beta values and mean normal methylation showed a median correlation of 0.4, implying a high overall effect of the TME on native beta values. Examination of the respective five-group sample sets in terms of tumor purity showed that while tumor purity estimates were a dominant feature differentiating subgroups in uncorrected data (Fig 3D), this effect was largely attenuated in the subgroups defined using corrected beta values (Fig 3E). In terms of co-clustering, while a nearly pure PAM50 Basal cluster was observed in unadjusted data, several Basal tumors co-clustered with Luminal and PAM50 HER2-enriched tumors (clusters 4 and 5 in uncorrected data, Fig 3F).
In corrected data, a larger fraction of the samples classified as PAM50 Basal at the RNA-sequencing level formed a single cluster (cluster 5) (Fig 3G). For comparison with the InfiniumPurify method, we also performed beta adjustment using the eponymous function as implemented in Qin et al. [19]. Overall, the InfiniumPurify method seemed to produce only modest changes in appearance when visualized as heatmaps (S2 Fig).

The effects of purity adjustment on X-chromosome methylation in an all-female cohort

The methylation state of the X chromosome is unique in that it undergoes lyonization (X inactivation, Xi) in females to achieve dose compensation for the extra gene copy in comparison to the male genome [42]. This yields an expected trimodal methylation pattern at high-CpG-density positions in females, with peaks at beta values of 0, 0.5, and 1, respectively. Given that our method produces valid estimates of the normal sample methylome, we expect these estimates to mirror those seen in bona fide normal samples and to adhere to the beta distribution expected from biological theory. We therefore extracted CpGs mapping to X-chromosome promoters (N = 2921) from both the inferred normal methylomes and GSE67919. For both data sources, we plotted the median beta distribution and calculated the fractions falling into the three theoretical X-methylation categories (hypo, Xi, and hyper; Fig 4A). With respect to these three methylation bins, the concordance was 90.9%, with the majority of X-promoter CpGs being in a hemimethylated (Xi) state. A minority of CpGs in both data sources were hypomethylated, and a subset of these are expected to represent promoters of genes that escape Xi. As a further test of biological coherence, we therefore examined whether the hypomethylated CpGs resided in promoters of genes that escape Xi. For this we cross-referenced the gene symbol annotations of the CpGs with a list of 75 genes found to escape Xi by Katsir and Linial [43]. As expected, the hypomethylated bin was enriched for localization in promoters of genes escaping Xi (4.6% of hypo CpGs vs. 0.8% and 0.6% for Xi and hyper CpGs, respectively). Similarly, plotting the beta distribution of all CpGs mapping to high-CpG-density positions (N = 6736) in adjusted and unadjusted tumor samples showed that purity-adjusted beta values adhere better to the theoretical expectation (Fig 4B).

The effects of purity adjustment on between-sample contrast at biologically significant classes of CpGs

To establish further biological validity for our beta correction approach, we analyzed pre- and post-adjustment beta values for the top 5000 most varying CpGs from the perspective of native genomic context, and with respect to the five-group hierarchical clusters, PAM50 subtypes, and TNBC vs. non-TNBC cancers. It has long been known that regions of the genome marked by the Polycomb repressive complex (PRC) in embryonic stem cells are frequently methylated in cancer [44]. Using ENCODE cell-line ChIP-seq data, we selected CpGs with the capacity to be marked by the combination of EZH2 and SUZ12 (N = 881) among the 5000 top varying CpGs (Fig 5A). As expected, these CpGs were situated in a CpG-island (CGI) and promoter context and often overlapped TCGA BRCA ATAC-seq peaks. The Basal/TNBC-enriched cluster 5 displayed the lowest aggregate methylation of PRC-bound CpGs, while cluster 3, with the highest proportion of Luminal B tumors, showed the highest (Fig 5B).
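The TFBS-based CpG selection described above can be sketched with the Bioconductor package GenomicRanges; cpg_gr, ezh2_peaks, and suz12_peaks are hypothetical GRanges inputs (CpG coordinates and ENCODE ChIP-seq peaks), not objects from the authors' code.

    library(GenomicRanges)

    # Keep CpGs (among the top varying set) overlapping both EZH2 and SUZ12 peaks.
    select_prc_cpgs <- function(cpg_gr, ezh2_peaks, suz12_peaks) {
      marked <- overlapsAny(cpg_gr, ezh2_peaks) & overlapsAny(cpg_gr, suz12_peaks)
      names(cpg_gr)[marked]
    }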
The effect of beta adjustment was least prominent in Basal/TNBC as compared to Luminal tumors, as the Basal cancers display the same aggregate methylation state at PRC-marked loci as normal breast tissue (Figs 3A and 5A). Luminal B and HER2-enriched tumors, on the other hand, more frequently display hypermethylation of PRC-marked loci, which is diluted by the presence of non-tumor cells. Other prominent transcription factors in BRCA include the pioneer factor FOXA1 as well as GATA3, which cooperate with and modulate ESR1 activity in luminal tumors [45]. To gauge the methylation state of this class of regulatory CpGs, we mapped ENCODE FOXA1- and GATA3-bound sites with overlapping ATAC-seq peaks in TCGA BRCA data (S3 Fig). A small number of sites (N = 52) among the top 5000 CpGs were annotated as overlapping all three features. These sites mapped to distal (>5 kb from the nearest promoter) and ocean-designated parts of the genome, consistent with these sites acting as distal enhancers. This class of CpGs was densely methylated in Basal/TNBC tumors as well as in inferred normal breast tissue (Figs 5A and 3A, respectively) and showed near-universal demethylation in non-Basal tumors. Again, the contrast between pre- and post-correction betas showed considerably improved separation of, e.g., Basal vs. Luminal tumors, and highlights that non-tumor tissue again acts to lessen this contrast in unadjusted data. Finally, we chose to focus on distal CpGs without overlapping ENCODE transcription factor binding sites (TFBS) (N = 476) as representative of non-regulatory DNA methylation (Fig 5D). These CpGs were situated in a low-CpG-density context and did not overlap TCGA ATAC peaks. The methylation state of these CpGs showed a lower association with PAM50 subtypes or a TNBC designation, and instead had more of a gradient-like methylation profile, ranging from very low aggregate levels in cluster 4 (Luminal A/B) tumors to generally high in cluster 3 (Luminal A/B) and cluster 5 (Basal/TNBC) tumors. Generally, demethylation of low-density CpGs has been attributed to lack-of-maintenance or proliferation-related processes; however, this is unlikely to be the cause here, as both HER2-enriched and Basal-like tumors displayed relatively higher methylation levels. In general, Luminal B tumors tended towards the lowest overall methylation levels at "non-functional" CpGs, although this was in no way defining for the subtype overall.

Fig 4. The effects of purity adjustment on X-chromosome methylation. A) Density plot highlighting the beta distribution of inferred non-tumor samples (red) and normal breast tissue (blue) in X-chromosome promoters for samples in the TCGA BRCA cohort. Partitioning of promoters based on average methylation state into hypo-, hyper-, and Xi showed high concordance in the respective data sets. Hypomethylated promoters were more frequently found to overlap previously published genes that undergo X-escape. B) Purity adjustment of tumor sample methylation beta values results in a distribution that more closely adheres to the distribution expected from the process of lyonization. https://doi.org/10.1371/journal.pone.0265557.g004

Fig 5. The effects of purity adjustment on between-sample contrast at biologically significant classes of CpGs. A) Clustered heatmap visualization of the 5000 most varying CpGs in the TCGA BRCA data set in purity-adjusted (left) and unadjusted (right) data.
Row annotation bars highlight genomic features associated with each CpG site and include genic context, CpG island/shore/ocean localization, TCGA ATAC-seq peak overlap, and ENCODE TFBS overlaps for ESR1, FOXA1, GATA3, EZH2, and SUZ12. Samples are clustered based on purity-adjusted data. B-C) Boxplot visualization (top) of the beta-value distribution before and after purity adjustment in the five hierarchical clustering subgroups (left), stratified by PAM50 subtype (middle), and stratified by TNBC status (right). Below, the same data are shown as density plots. Panel B) highlights the beta distribution of CGI CpGs with EZH2 and SUZ12 TFBS overlaps. Panel C) highlights the same for TCGA BRCA-specific ATAC-seq peaks with overlapping FOXA1 and GATA3 TFBS. https://doi.org/10.1371/journal.pone.0265557.g005

Stability of corrected beta-values to perturbations in tumor purity estimates

In order to quantify the effect of uncertainty in the purity estimate on beta adjustments, we performed a simulation experiment in which purity estimates were perturbed with increasing amounts of normally distributed noise (N(0, s), s = 0.01 to 0.19 in increments of 0.02). Beta adjustments were made for the top 5000 most varying CpGs (by standard deviation), keeping the cohort size (N = 630) and all other parameters unchanged. Similarity with the unperturbed results was compared for: i) FlexMix population calls on the CpG level, ii) sample-level beta correlation, and iii) correlation of individual inferred normal samples to the mean methylation of GSE67919 normal samples. As expected, increasing amounts of noise in purity estimates negatively affected all our calculated metrics in proportion to the magnitude of added noise (Fig 6A). As a rule, the inferred normal estimates had the fastest decay rate with increasing noise, although this effect was not as pronounced on the level of the cohort average as for individual estimates. Similarly, but to a lower extent, FlexMix group calls on the level of individual CpGs were also affected by noise, but the overall concordance with calls obtained using unperturbed purity estimates remained consistently high (lowest median concordance ~0.9). In terms of sample-level adjusted beta values, a decrease in median correlation and an increase in correlation variability were observed, but the median correlation remained above 0.95 in the highest-perturbation comparison. Overall, we found that our approach for beta adjustment appeared robust to perturbations of at least moderate magnitude (mean absolute purity shift in the highest noise test: 0.15). Additionally, we tested purity adjustment in the 235 TNBC samples with available WGS purity estimates, Infinium 850K methylation data, and gold-standard pyrosequencing calls for BRCA1 promoter methylation [24]. The most varying 850K probe in the BRCA1 promoter showed excellent overall agreement with pyrosequencing-based BRCA1 hypermethylation calls, and applying the FlexMix-based adjustment framework to this single CpG produced highly concordant results (Fig 6B). Analogously to the first perturbation test, we evaluated the influence of increasing noise in the purity estimate, this time for the BRCA1 promoter CpG only. We performed 50 iterations each of beta adjustment across 26 increasingly confounded purity estimates (noise N(0, s), s = 0.02 to 0.5 in increments of 0.02).
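A compact sketch of this perturbation experiment, re-using the adjust_one_cpg sketch shown earlier and computing only one of the three reported similarity metrics (the per-sample correlation with the unperturbed adjusted betas, ref_adj), could read:

    perturb_test <- function(beta_mat, purity, ref_adj,
                             sds = seq(0.01, 0.19, by = 0.02)) {
      sapply(sds, function(s) {
        noisy <- pmin(pmax(purity + rnorm(length(purity), 0, s), 0), 1)
        adj <- t(sapply(seq_len(nrow(beta_mat)), function(i)
          adjust_one_cpg(beta_mat[i, ], noisy, seed = i)$tumor))
        # median per-sample correlation with the unperturbed adjusted betas
        median(sapply(seq_len(ncol(adj)), function(j) cor(adj[, j], ref_adj[, j])))
      })
    }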
Somewhat surprisingly, overall accuracy remained stable across the tested span, albeit with a tendency towards deterioration at higher noise levels (Fig 6C). Interestingly, adding higher levels of pure noise to the purity estimate results in a lower overall magnitude of adjustment, i.e., beta estimates are shifted less on average the more deteriorated the purity estimate is (Fig 6D). The high accuracy maintained across noise levels is a byproduct of only the purity estimates being confounded by noise while the methylation levels are kept unchanged. The FlexMix framework is therefore still able to reliably separate tumors with differing methylation states, but with increasingly poor performance in finding the correct intercepts for the tumor and normal populations, as indicated in Fig 6A.

Discussion

Epigenetic alterations are a hallmark of cancer. Delineating such alterations may help in the identification of genes important for tumorigenesis, but may also aid in the global characterization of epigenetic patterns associated with molecular subgroups of a malignancy. In most epigenetic analyses of bulk tumor tissue to date, the effect of tumor purity on methylation estimates has not been fully addressed, even though it has long been recognized that methylation estimates can be affected by the differing epigenetic states present in the mixture of malignant and non-malignant cell types that make up the bulk tumor. Here, we present a simple strategy to adjust for this at an individual CpG level in high-dimensional DNA methylation data using linear regression modeling. The concept of adjusting tumor methylation estimates using tumor purity estimates is not new and has been proposed for both bisulfite-sequencing [15] and Illumina array data [15, 17, 19–22]. Most notably among previous work directed at solving this issue for Illumina methylation data, the authors of the InfiniumPurify package have developed several tools for addressing the issues raised in the current work. The most recent addition to the InfiniumPurify package implements a mixture modeling approach to define differentially methylated (DM) CpGs between groups [22], while controlling for the influence of purity and between-group differences in methylation states. This approach, however, requires prespecified group variables and does not produce "purified" methylation estimates for downstream use. The InfiniumPurify R-package does include a function for producing purified methylation estimates (function InfiniumPurify); however, this models the tumor compartment as one entity, leading to only minor improvements in the analyses presented in our current investigation. In addition, the InfiniumPurify package is currently only implemented for the 450K platform. Other available methods have been developed to quantify cell type infiltration (e.g., MethylResolver [13] or epiDISH [10]) or to perform differential methylation analysis between predefined groups and/or while controlling for known confounding cell types (e.g., InfiniumDM [22] or CellDMC [11]). Taken together, we therefore believe there is still an unmet need to further develop flexible and generic algorithms that can address the issue of correcting tumor methylation estimates. To this end, we utilized automated population discovery using the FlexMix framework and linear regression-based adjustment of CpG methylation beta values.
We show that our approach yields intuitively interpretable and biologically sound results when applied to a large cohort of breast cancer tumors collected by The Cancer Genome Atlas project. The main advantages of our method are its intuitive and simple approach, its deterministic output, and its cross-platform applicability. We show that a mixture modelling approach seems to outperform at least simple dichotomization, as well as regression-based adjustment that models the tumor compartment as a single entity. Additionally, our approach can infer the "normal" background methylome with reasonable accuracy, eliminating the need for reference methylomes when a sufficiently large cohort is profiled. We believe that the general approach outlined in this work can be applied to most types of methylation data, potentially with purity variables obtained from many different sources, including the methylation array itself using tools included in packages such as InfiniumPurify [21] or MethylResolver [13]. This general framework can be improved in future work by, e.g., refinement of the population discovery process using alternatives to FlexMix and by modelling of systematic bias affecting regression intercept terms and residuals. Moreover, considering the continuously growing body of public methylation profiles for different malignancies, the concept of static reference sets usable for correcting small cohorts could be investigated. As currently devised, the method runs the full 450K array (630 samples × ~420 000 CpGs) in around 8 hours on a standard 4-core processor, and runtime scales down approximately linearly with a larger core/thread count. Notably, this work is not intended to produce an optimal method or to perform extensive benchmarking against other methods, nor is it aimed at evaluating the feasibility of using different data sources for deriving input purity estimates. Instead, this work serves to illustrate a way of accomplishing the goal of reducing the impact of the non-tumor compartment on methylation estimates derived from the Illumina 450/850K arrays. Our search for a method capable of adjusting methylation estimates to account for tumor purity started with our observations at the BRCA1 gene locus [24]. Through this work it has become evident that the effect of non-tumor methylation is pervasive in bulk tumor data, and that thousands of loci are affected in any given high-throughput methylation experiment. The RB1 gene represents another bona fide tumor suppressor gene that has long been known to become inactivated through DNA methylation in cancer [25]. High-impact genes such as RB1 and BRCA1 are epigenetically inactivated at high frequencies, which is why they were detected in the early days of molecular cancer research. However, epigenetic silencing of a key tumor suppressor gene may be infrequent yet critical when it occurs. An illustrative example is homologous recombination deficiency (HRD) in breast cancer. While promoter hypermethylation of BRCA1 has been reported as the most frequent cause of HRD in TNBC, a substantial number of patients' tumors still lack a known inactivation mechanism (driver alteration) [26]. In these tumors the HRD phenotype is likely conferred by alterations in less penetrant genes or through polygenic interactions, as illustrated by the infrequent (2%) promoter hypermethylation of RAD51C in TNBC, which nevertheless confers a genetic HRD phenotype similar to BRCA2 inactivation [26].
Improved processing of DNA methylation data can thus be a critical component in the search for novel but infrequently inactivated tumor suppressor genes. Another important issue made more addressable by our method is the question of which, and how many, pure epigenetic phenotypes exist in, e.g., breast cancer, and what the base epigenotypes are on top of which the intrinsic subtypes reside. Epigenetic profiling of pure (or purified) methylomes may therefore provide novel insights into new drivers of HRD or improve our understanding of the base, unconfounded tumor epigenome, allowing for improved patient stratification and a better understanding of the basic tumor biology of breast cancer.

Conclusions

We present a conceptual method and an algorithm for correcting large-scale DNA methylation data for the influence of the TME on global methylation estimates. Our method uses a flexible and simple mixture modeling approach to identify tumor sample populations differentially influenced by the non-tumor background and to correct for this effect on a global basis. Our method also generates accurate estimates of the ground-state methylation of the non-tumor compartment and does not rely on available normal samples. We expect that our approach can be developed further and integrated into pre-existing methods to improve their capacity for de-noising bulk methylation data and to allow for more unbiased epigenomic analyses. Additionally, we believe that in-depth analysis of loci that exhibit a BRCA1-like methylation pattern could yield novel leads in the search for tumor suppressor genes frequently inactivated by DNA methylation, and that purified methylomes may provide valuable insights into the question of pure epigenotypes in cancer.
An Efficient Improved Greedy Harris Hawks Optimizer and Its Application to Feature Selection

To overcome the lack of flexibility of Harris Hawks Optimization (HHO) in switching between exploration and exploitation, and the low efficiency of its exploitation phase, an efficient improved greedy Harris Hawks Optimizer (IGHHO) is proposed and applied to the feature selection (FS) problem. IGHHO uses a new conversion strategy that enables flexible switching between exploration and exploitation, allowing it to jump out of local optima. We replace the original HHO exploitation process with an improved differential perturbation and a greedy strategy to improve its global search capability. We tested it in experiments against seven algorithms using single-peaked, multi-peaked, hybrid, and composite CEC2017 benchmark functions, and IGHHO outperformed them on optimization problems with different characteristics. We propose a new objective function for the problem of data imbalance in FS and apply IGHHO to it. IGHHO outperformed the comparison algorithms in terms of classification accuracy and feature subset length. The results show that IGHHO is suitable not only for global optimization of functions with different characteristics but also for practical optimization problems.

Introduction

Gradient-based optimization methods have been widely applied to linear, differentiable, and continuous problems [1]. However, practical problems are increasingly nonlinear, non-differentiable, and discontinuous, rendering gradient-based methods ineffective. In contrast, the intelligent algorithms developed rapidly in recent years can effectively solve practical problems, although they lack rigorous mathematical derivations. In the twenty-first century, the development of technology has also led to further research in the field of intelligent algorithms, and a series of new metaheuristic algorithms have been derived. For example, Kaveh et al. [2] proposed an efficient hybrid method based on the Harris Hawks Optimizer and the imperialist competitive algorithm: the Harris Hawks algorithm has an efficient exploitation strategy but performs poorly in the search for optimal solutions, which is compensated for by the imperialist competitive algorithm. Song et al. [3] identified the deficiency in the global search capability of the Harris Hawks Optimizer and proposed the persistent-trigonometric-differences mechanism to improve it; in addition, they improved the energy factor of the original algorithm to better balance exploration and exploitation; finally, they applied it to the parameter identification problem of photovoltaic model parameter extraction. Zhong et al. [4] proposed an integrated-learning Harris Hawks optimization algorithm with a terminal replacement mechanism, combining a comprehensive learning strategy with HHO to improve the convergence of the optimizer. FS is an important data-preprocessing method in machine learning, but it is NP-hard [5], with a search space of 2^n for n features, motivating the use of approximation algorithms to obtain near-optimal solutions as well as metaheuristic algorithms [6]. Inbarani search strategy. The performance of IGHHO was tested on the CEC2017 test set [28], which contains 29 benchmark functions. The ranking method and Wilcoxon rank-sum test results indicate significant improvement. Results on the FS problem show that the algorithm has advantages in practical applications.
The major contributions of this study are as follows:

• A novel, efficient IGHHO for global optimization and feature selection.
• The proposed IGHHO can switch flexibly between exploration and exploitation and has strong exploitation capability.
• The performance of IGHHO is better than that of other state-of-the-art optimization techniques.
• IGHHO is applied to feature selection, and we verify its performance on open datasets.

The remainder of this article is organized as follows. Section 2 describes the principle of HHO and its model, and Section 3 discusses the details of IGHHO and its motivation. Section 4 describes the benchmark-function experiments. The method is applied to FS in Section 5. Section 6 relates our conclusions and proposes future research directions.

An Overview of HHO

HHO is mainly inspired by the cooperative behavior and chasing style of Harris hawks during a raid [12].

Exploration

At the beginning of a hunt, Harris hawks randomly stay on high ground to find and track prey by eye. HHO models this process as Harris hawks randomly perching according to two equal-opportunity strategies,

\[
X(t+1) =
\begin{cases}
X_{rand}(t) - w_1 \, \lvert X_{rand}(t) - 2 w_2 X(t) \rvert, & random \ge 0.5 \\
\bigl(X_{rab}(t) - X_m(t)\bigr) - w_3 \bigl(L + w_4 (U - L)\bigr), & random < 0.5
\end{cases}
\tag{1}
\]

where X(t + 1) is a Harris hawk's position in the next iteration; X_rab(t) is the position of the prey (i.e., the individual with the optimal fitness value); X(t) is a Harris hawk's position in the current iteration; w_1, w_2, w_3, w_4, and random are random numbers in the range (0, 1); U and L are the upper and lower bounds, respectively, of the variables defining the activity range of the Harris hawk population; X_rand(t) is a randomly selected position in the population; and X_m(t) is the individuals' average position, calculated from Equation (2), i.e., the mean of all hawks' positions, where Number is the number of Harris hawks in the population.

Exploration to Exploitation

HHO switches from exploration to exploitation according to the change in the escape energy of the rabbit, and chooses among exploitation strategies based on it; the escape energy decays over the iterations following the standard HHO schedule Esc = 2 Esc_0 (1 − t/T_max), where Esc is the escape energy of the rabbit, with initial value Esc_0, which varies randomly within (−1, 1); T_max is the maximum number of iterations; and t is the current iteration number.

Exploitation

HHO uses the following strategies to simulate the exploitation process.

Soft and Hard Encircle

When the prey has no chance to escape (i.e., when the random number r, a control factor in Section 2.4, is greater than 0.5), the flock selects Equation (4) to round up the prey if the absolute value of its escape energy is greater than or equal to 0.5, and Equation (5) otherwise, where ΔX(t) is the difference between the optimal individual and the current individual, calculated by Equation (6), and Jump is the distance of random jumps during prey escape, calculated by Equation (7), with w_5 a random number within (0, 1).

Soft Encircle with Advanced Fast Dives

When the prey is about to escape (random is less than 0.5) and the absolute value of the escape energy is greater than 0.5, HHO uses a greedy approach to simulate a Harris hawk flock surrounding the prey through Equation (10). Equations (8) and (9) generate the two alternative positions of the particle, denoted P and Q, respectively.
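For concreteness, the exploration update of Equation (1) and the escape-energy schedule can be sketched as follows. This is an illustrative R translation following the standard HHO formulation in [12]; the paper's own experiments used MATLAB, and the function and variable names here are ours.

    # One exploration step for hawk i; X is the population matrix (rows = hawks),
    # X_rab the current best position, U and L the search-space bounds.
    hho_explore <- function(X, i, X_rab, U, L) {
      w      <- runif(4)                   # w1..w4 ~ U(0, 1)
      random <- runif(1)
      X_rand <- X[sample(nrow(X), 1), ]    # randomly selected hawk
      X_m    <- colMeans(X)                # average position of the population
      if (random >= 0.5) {
        X_rand - w[1] * abs(X_rand - 2 * w[2] * X[i, ])
      } else {
        (X_rab - X_m) - w[3] * (L + w[4] * (U - L))
      }
    }

    # Escape energy: Esc0 varies randomly in (-1, 1) and the magnitude decays
    # linearly over the iterations, Esc = 2 * Esc0 * (1 - t / T_max).
    escape_energy <- function(t, T_max) 2 * runif(1, -1, 1) * (1 - t / T_max)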
Judging by Equation (10), if f(P) is smaller than f(X(t)), P is selected as the particle's new position, and if f(Q) is smaller than f(X(t)), Q is selected as the particle's new position. If both are smaller than f(X(t)), P is selected according to the order of program execution. If neither is smaller than f(X(t)), the particle position is not updated.

Hard Encircle with Advanced Fast Dives

When the prey is about to escape (random is less than 0.5) and the absolute value of its escape energy is less than or equal to 0.5, HHO uses a greedy approach to simulate the flock pouncing on the prey through Equation (13).

2.4. The Overall Structure of HHO

Figure 1 shows the flowchart of the HHO algorithm. HHO relies heavily on the escape energy Esc to control the switch between exploration and exploitation: exploration is performed when |Esc| ≥ 1, and exploitation is performed otherwise.

Differential Perturbation Strategy

The DE algorithm is known for its good population diversity [29]: in each iteration, every particle is perturbed by the weighted difference of two randomly selected particles, which keeps the population diverse. We propose Equation (14) for the differential perturbation of particles, where d is the dimension index of the particle, an integer in the range [1, Dim], with Dim the total dimension of the search space; rand is a uniformly distributed random number in the interval (0, 1); and i is the index of the current particle. The first part of Equation (14) is the historical best position of the ith particle, and the second part is the weighted difference between the best position in the whole population and the particle's historical best position. When rand is large, the particle's convergence speed is high, and when rand is small, the particle performs a local search around its historical best position. Updating the particles by Equation (14) can direct them toward the region where the global optimal solution is more likely to be found, based on their historical optimal positions. This enhances the algorithm's search ability while enriching the population diversity. Our proposed differential perturbation strategy has several advantages: (1) instead of using randomly selected particles, it makes full use of the 'cognition' of the particles (historical optimum) and the 'social cognition' of the population; (2) the method takes a random number from (0, 2) as the weight, which effectively balances exploration and exploitation, avoiding convergence that is too fast and falls into a local optimum, while exploration does not waste the 'social cognition' of the whole population.

Greedy Strategy

To fully exploit the properties of inter-particle collaboration and knowledge sharing, we propose an exploitation strategy different from differential perturbation, where E in Equation (15) is the escape energy of the particle; α_1 and α_2 in Equations (16) and (17) are weight factors, which can be calculated from Equation (20); and mean_best_i in Equation (17) can be calculated from Equation (21), which extracts the k particles with better fitness values than the current particle from the list of individual historical bests of the population. It is worth noting that the operations in Equations (15)–(21) are performed for each dimension of a particle. In Equation (15), the escape energy E decreases linearly with the number of iterations, implying that the particle converges increasingly quickly.
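Equation (14) itself is not reproduced in the text above. Based on the description (the particle's historical best plus a weight drawn from (0, 2) times the difference between the population best and that historical best), a per-dimension sketch of the update, under that assumption, is:

    # Assumed form of the differential-perturbation update of Equation (14):
    # pbest_i is the particle's historical best, gbest the population best.
    diff_perturb <- function(pbest_i, gbest) {
      w <- 2 * runif(length(pbest_i))   # weight in (0, 2), drawn per dimension
      pbest_i + w * (gbest - pbest_i)
    }

The greedy-strategy update of Equations (15)-(21) is not reproduced above either, so it is left as a placeholder in the later sketches.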
Equations (16)–(18) are inspired by the GWO, using multiple better particles to guide the remaining particles in their search for the optimum. However, GWO uses the three best particles, which inevitably leads to premature convergence [30], and GWO has more parameters and higher complexity than IGHHO. In general, each particle can provide valuable information, which is called the particle's own "advantage" [31]. For example, a particle with good fitness indicates that the region in which it is located is likely to contain the global optimum, while a particle with poor fitness indicates that its current region has a low probability of containing the global optimum; to improve global search efficiency, such regions should be avoided. To take full advantage of these "advantages", we use the optimal particle in the population and the mean of the k particles with better fitness than the current particle as learning objects [32]. This reflects, to a certain extent, the idea of cooperation and knowledge sharing in evolutionary algorithms. Therefore, our proposed equations have the following characteristics: (1) fewer and simpler parameters compared to GWO, so the influence of parameter settings is slightly reduced; (2) we borrow the greedy idea of the GWO algorithm, drawing on the optimal particle in the population and the "advantages" of the other particles to improve the stability of the search for the optimum. In this strategy, the particles are therefore updated according to Equation (22).

Hybrid Differential Perturbation with Greed

IGHHO exploits the region where a particle is located through Equation (14) or Equation (22), under the condition that the particle's fitness has improved. While Equation (22) has stronger exploitation ability, Equation (14) has stronger exploration capability. If Equation (22) is used extensively in the early stage to update particle positions, the particles will fall into local optima prematurely; if Equation (14) is used extensively in the later stage, optimization becomes too slow and inefficient. To balance the two, we use the fluctuation of a sine function [33] to alternate between updating particle positions with Equations (14) and (22), as shown in Algorithm 1, where C_r is calculated from Equation (23).

Algorithm 1 Updating way of exploitation.
1: Calculate C_r according to Equation (23)
2: if rand < C_r then
3:   update the particle position by Equation (14)
4: else
5:   update the particle position by Equation (22)
6: end if

As Figure 2 shows, C_r is a function of the iteration number t: it fluctuates around 0.5, and its fluctuation range gradually increases with the iteration number. This balances the number of executions of the two strategies so that the particles do not converge too quickly in the early iterations and converge strongly in the later ones.

Conversion between Exploration and Exploitation

The transition between exploration and exploitation in HHO relies only on the escape energy factor, which varies linearly during the iterations, so the algorithm cannot switch flexibly between the two. Aydilek [34] proposed a hybrid strategy combining FA and PSO based on whether the current particle improved: the current particle is compared with the previous global optimum and updated using FA if it is better; otherwise, it is updated using PSO.
However, doing so results in one strategy being executed in a much larger proportion of runs than the other, so the other strategy is not fully exploited. Our proposed algorithm performs the exploitation phase if the current particle is better than its previous optimal value; otherwise, it performs the exploration phase. This allows the particle to take full advantage of previous information and balances exploitation and exploration. According to the above description, the conversion strategy between exploitation and exploration proposed in this paper is as follows, where f(pbest_i^(t−γ)) is the best value achieved by the ith particle in the previous γ iterations, and t is the current iteration number. Figure 3 shows the flowchart of IGHHO, which has four input parameters: the number of particles, the maximum number of iterations, the search-space boundary values, and the problem dimension. The IGHHO proposed in this paper differs from HHO mainly in that HHO, guided by the escape energy, focuses more on exploration in the early stage and more on exploitation in the later stage, which makes it prone to premature convergence and poor performance; IGHHO can switch flexibly between the two, changing strategy if a particle does not achieve better performance after γ iterations. In the exploitation stage, the introduction of two exploitation methods adds flexibility while strengthening the exploitation ability and enriching the population diversity.

Computational Complexity Analysis of the Algorithm

In general, the time complexity of a metaheuristic algorithm is mainly composed of the following three parts:

1. Initialization of the population. The time complexity of this part is mainly determined by the population size N and the problem dimension D, and generally does not exceed O(N × D).
2. Computation of the fitness of the initial population. The time complexity of this part is mainly determined by the population size N and the cost of evaluating the objective, and generally does not exceed O(N × Cost).
3. Main loop. The time complexity of this part is mainly determined by the number of iterations T, the population size N, the problem dimension D, and the cost of evaluating the objective, and generally does not exceed O(T × N × (D + Cost)).

The time complexity of our algorithm also consists of these three main components:

1. Population initialization. The time complexity of this part is comparable to that of other algorithms, O(N × D).
2. Initial population fitness calculation. The time complexity of this part is also comparable to that of other algorithms, O(N × Cost).
3. Main loop. As can be seen in Figure 3, the time complexity of this part of the algorithm mainly consists of particle position updates and fitness calculations. The particle position is updated alternately by the exploration strategy and the exploitation strategy. When the algorithm does not satisfy the judgment condition, the left branch is executed; that is, the exploration strategy of the original Harris Hawks algorithm is applied. When the algorithm satisfies the judgment condition, the right branch is executed; that is, the exploitation strategy is applied through Equation (14) or Equation (22). This part is also comparable to other algorithms.

We can conclude from the above analysis that the time complexity of the proposed IGHHO is comparable to that of other algorithms.
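Putting the pieces together, the overall per-particle update described above (the γ-iteration improvement test that switches between phases, and the C_r-controlled choice between Equations (14) and (22)) can be sketched as below, re-using the hho_explore and diff_perturb sketches from earlier. Both the form of C_r and the body of greedy_update are placeholders labeled as such, since Equations (15)-(23) are not reproduced in the text.

    # Placeholder stand-in for Equation (22): move toward the global best with a
    # linearly decaying step. This is NOT the paper's exact greedy update.
    greedy_update <- function(X, i, gbest, t, T_max) {
      E <- 2 * (1 - t / T_max)
      X[i, ] + runif(1) * E * (gbest - X[i, ])
    }

    # Skeleton of one IGHHO update for particle i (illustrative only).
    ighho_step <- function(X, i, pbest, f_pbest_lag, gbest, f, t, T_max, U, L) {
      improved <- f(X[i, ]) < f_pbest_lag[i]     # better than gamma iterations ago?
      if (improved) {                            # exploitation phase
        C_r <- 0.5 + 0.4 * (t / T_max) * sin(2 * pi * t / 50)  # assumed form of Eq. (23)
        if (runif(1) < C_r) diff_perturb(pbest[i, ], gbest)    # Equation (14)
        else                greedy_update(X, i, gbest, t, T_max)
      } else {                                   # exploration phase (original HHO)
        hho_explore(X, i, gbest, U, L)
      }
    }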
The computational complexity of the algorithm consists of three main parts: initializing the population positions, updating the population positions, and calculating the particle fitness. Our proposed IGHHO algorithm has roughly the same framework as the HHO algorithm, so the computational complexity of these parts is the same.

Experimental Design and Parameter Settings

To verify the global search capability of IGHHO, we tested it on the CEC-2017 test set [28], which contains 29 test functions, of which f1-f2 are single-peaked, f3-f9 are multi-peaked, f10-f19 are hybrid, and f20-f29 are composite. All experiments were compiled and run in MATLAB R2020b on a Windows 10 platform using a Core i7-6700HQ CPU at 2.60 GHz with 16 GB of RAM.

Experimental Results and Analysis

Tables 2-4 list the experimental results of IGHHO and the comparison algorithms in the same environment. The results for the comparison algorithms are taken from the simulations of Zhangze et al. [42]. From Table 2, we can see that IGHHO ranks first on both single-peaked functions and in the top three on the multi-peaked functions, ranking first on f4 and f7. This shows that IGHHO performs well on global optimization problems and has a strong ability to jump out of local optima. In Table 3, IGHHO ranks first on most hybrid functions, produces order-of-magnitude differences from second place on f12, f14, and f18, and is not far from first place on the poorly performing functions f16 and f19. This demonstrates the effectiveness of our strategy. From Table 4, it can be seen that IGHHO ranks first on eight of the nine composite functions and is on par with the comparison algorithms in terms of variance. Overall, IGHHO is superior to BMWOA, SCADE, CDLOBA, CLPSO, IWOA, BLPSO, and RCBA on all types of test functions in CEC-2017. This is also evident from the results of the Wilcoxon signed-rank test [43] in Table 5, where, out of n problems, w/t/l indicate the number of functions on which IGHHO is superior, equal, or inferior, respectively, to the comparison algorithm. It can be seen that IGHHO is superior to all comparison algorithms.

Convergence Analysis

In order to better show the optimization performance of IGHHO, this section sets up experiments to analyze the convergence of the proposed algorithm. We compare the proposed algorithm with the recently proposed HHO [12], SSA [6], SMA [44], BOA [45], WOA [11], and ALO [46]. For fairness, the population size of each algorithm is set to 30, the maximum number of iterations to 1000, and the particle dimension to 30; the search space is [−100, 100], and the evaluation functions are taken from CEC2017. The experimental results are shown in Figures 4 and 5, and it is evident that the proposed IGHHO has better convergence during the optimization process. For example, IGHHO ranks in the top 2 among the compared algorithms in terms of convergence speed on functions f1, f2, f3, f4, f6, f7, f8, f9, f10, f11, f12, f13, f14, f15, f16, f17, f18, f20, f21, f22, f23, f24, f25, f27, f28, and f29. This indicates that the convergence of the proposed algorithm is very competitive among the compared algorithms.

Application to FS

FS is an integral step for improving classification performance: it removes irrelevant and redundant features and speeds up computation [47]. Wrapper methods based on swarm intelligence algorithms are widely used due to their simplicity and ease of implementation.
The method treats the model as a black box [48], evaluates feature subsets using classifiers or other learning models, and continuously improves their quality. Based on this, we apply IGHHO to the FS problem.

Model Description

We evaluate the candidate feature subsets with a K-Nearest Neighbor (KNN) classifier. Considering the impact of the data imbalance problem on feature selection [49,50], we designed the objective function by weighting the squared (second-order) classification error rate and the length of the feature subset, where s_f is the length of the selected feature subset; n_f is the total number of features in the dataset; µ is a factor that balances the classification error rate against the length of the feature subset; n is the number of classes; TP_k is the number of correctly classified instances in class k; and S_k is the number of instances in class k. To better classify classes with few instances, we use the square of the classification error rate to penalize poorly performing classes. The reason for this is that some classes in a dataset have very few instances, while others have very many. For example, in a binary classification problem with 10 instances, class A may have only one instance while class B has nine. The classifier can easily achieve 90% classification accuracy by simply assigning all instances to class B, which seems efficient, but such an algorithm will perform poorly on real-world problems. Considering only the classification error rate would also cause the selected feature subset to contain more redundant features, which greatly increases the algorithm's computational complexity, especially for high-dimensional problems. Therefore, we include the size of the feature subset in the objective function, so as to minimize the ratio of the number of selected features to the total number of features.

Experimental Configuration and Parameter Settings

Since some datasets have few samples, we used five-fold cross-validation, dividing each dataset into five parts and taking four for training and one for testing. Only the training set was used for FS, and the test set was input to the KNN model to evaluate the FS performance.

Experimental Results and Analysis

Experimental results comparing IGHHO with the other algorithms on the FS problem are presented in Table 7. In terms of total classification accuracy, the proposed IGHHO algorithm ranks in the top three on all datasets; it ranks first on the datasets Zoo, Waveform_noise, Lung, Sonar, Isolet, Leukemia, Arcene, and Colon, and even achieves 100% classification accuracy on Leukemia. In contrast, the EPO algorithm ranked first on only three datasets, and its classification accuracy was only 0.57% better than IGHHO's; the SMA algorithm ranked first on only two datasets, Clean1 and Colon, and it is worth noting that on Clean1 IGHHO achieved only 0.1% lower classification accuracy than first place, while on Colon IGHHO tied with SMA for first place; the ALO algorithm ranked first only on the dataset CNAE, and again IGHHO was only 1.57% lower; the HHO, SSA, BOA, and WOA algorithms did not achieve first place on any dataset. In terms of average ranking, our proposed improved algorithm ranks 1.58 on average, pulling away from the second place at 2.88 and the third place at 3.69. This all indicates that our improved algorithm is dominant relative to the comparison algorithms.
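To make the objective concrete, the following is a hedged R sketch of a fitness of the kind described in the Model Description above: a weight µ balances a squared per-class error term against the relative subset length. The exact published equation is not reproduced in the text, so the precise form (and the µ default) are assumptions; the KNN evaluation uses the class package.

    library(class)   # knn()

    # mask: binary vector over features; y_train / y_test: factor class labels.
    fs_objective <- function(mask, X_train, y_train, X_test, y_test,
                             mu = 0.99, k = 5) {
      sel <- which(mask == 1)
      if (length(sel) == 0) return(1)              # penalize empty subsets
      pred <- knn(X_train[, sel, drop = FALSE],
                  X_test[,  sel, drop = FALSE], cl = y_train, k = k)
      per_class_err <- sapply(levels(y_train), function(cls) {
        idx <- y_test == cls
        if (!any(idx)) return(0)                   # class absent from this fold
        mean(as.character(pred[idx]) != as.character(y_test[idx]))
      })
      err2 <- mean(per_class_err^2)                # squaring penalizes weak classes
      mu * err2 + (1 - mu) * length(sel) / ncol(X_train)
    }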
On the other hand, Figure 6 gives a box plot analysis of the classification accuracy of IGHHO and the comparison algorithms. We can see that the average classification accuracy of IGHHO is nearly 90%, which is much higher than that of the comparison algorithms. The best and lowest classification accuracies are also dominant compared with the comparison algorithms. Combined with the above analysis, the IGHHO algorithm proposed in this paper has advantages over the traditional optimization algorithms in improving the classification accuracy of feature selection. From the average size of the feature subsets selected by IGHHO and the comparison algorithms, shown in Figures 7-10, it can be observed that IGHHO achieves the shortest feature subset length on six datasets, EPO on five, and SMA on two. Looking at the data more closely, IGHHO achieved a feature subset length nearly 32% shorter than the second place on the dataset Wine, with the final selected feature subset 43% shorter overall compared to the other comparison algorithms; nearly 43% shorter than the second place on Zoo and 45% shorter overall; nearly 65% shorter than the second place on Lung and 75% shorter overall; nearly 82% shorter than the second place on Colon and 94% shorter overall; and nearly 88% shorter than the second place on Leukemia and 92% shorter overall. For the dataset Waveform_noise, IGHHO did not achieve an outstanding feature subset length; for the dataset Sonar, IGHHO achieved about the same result as EPO, while both had a significant advantage over the other comparison algorithms (nearly a 45% reduction); for the dataset Hill Valley, IGHHO achieved second place; for the dataset Clean1, IGHHO achieved second place with an average reduction of 61% compared to the 4th, 5th, 6th, 7th, and 8th places; for the dataset Madelon, IGHHO ranks third and is clearly behind first place, but it still achieves good results compared to the other comparison algorithms (a 34% reduction in feature subset length). Finally, for Arcene, a dataset with a very large number of features, IGHHO ranks second and has a significant advantage (an almost 86% reduction). Overall, the IGHHO algorithm proposed in this paper has advantages over the comparison algorithms in terms of the length of the selected feature subset. Table 8 compares the average computation time of IGHHO and the other algorithms on the FS problem, from which it is clear that SMA is the fastest, while IGHHO shows a slight overall advantage over the remaining algorithms. Comparing IGHHO with HHO, IGHHO is almost twice as fast as HHO on the medium-dimensional datasets Clean1, Madelon, Isolet, and CNAE and ranks in the top three, which is a competitive advantage over the other comparison algorithms; on the other hand, it is about three times faster than HHO on the high-dimensional datasets Colon, Leukemia, and Arcene and ranks first (second on the dataset Colon), which is a significant advantage over the other comparison algorithms. IGHHO's good performance in runtime efficiency is not surprising, because we introduced the differential perturbation and greedy strategies in the exploitation stage of the algorithm, which allows the algorithm to explore unknown regions while retaining strong exploitation capability in the later iterations.
This greatly improves the runtime efficiency of the algorithm. The advantage is especially prominent when dealing with high-dimensional problems. In summary, the improvement of the HHO algorithm is successful.

Discussion

The purpose of this study is to propose an efficient search mechanism to solve the feature selection problem for low- and high-dimensional datasets. Using a hybrid approach, this study integrates greedy and differential-perturbation strategies into the exploitation phase of HHO and introduces a dynamic conversion strategy into the mechanism that switches between exploitation and exploration, enhancing the global search capability of the algorithm while preserving a solid local search capability. Through the preceding experimental analysis and comparative study on numerical optimization and feature selection problems, we demonstrate the effectiveness of the proposed method. Our proposed method has the following advantages:

• IGHHO can efficiently solve optimization problems of varying difficulty and complexity. The solutions generated by IGHHO have better fitness values compared to various other advanced optimization methods, as shown in Tables 2-4 and 7.
• Statistically, the solutions generated by IGHHO are significantly different from those generated by other advanced optimization methods, as shown in Table 5.
• Although there is no difference between IGHHO and HHO in terms of computational complexity, IGHHO can produce more efficient solutions than HHO, especially for high-dimensional problems; see Table 8.
• To verify the effectiveness of IGHHO for the feature selection problem, the datasets selected for this study vary widely in feature size, from 13 to 10,000 features, providing an adequate test environment for validating the optimization strategy; see Table 6.
• In terms of the length of the selected feature subsets, IGHHO achieved good results on all datasets, with an overall minimum average reduction of 34% and a maximum of 94% compared to the other comparison methods; see Figures 7-10.
• In terms of classification accuracy, the feature subsets selected by IGHHO helped the learning algorithm KNN produce an average accuracy of 89.42% across all classification datasets, with a maximum accuracy of 100%; see Table 7 and Figure 6.
• The design principle of IGHHO is simple enough that researchers can easily build on our algorithm with further enhancements.

In addition to these advantages, our proposed IGHHO has the following limitations:

• IGHHO is derived from HHO, and thus it is relatively computationally expensive compared to other optimization methods for low-dimensional problems; see Table 8.
• IGHHO is a stochastic optimization technique, and the subset of features it selects may vary from run to run, which may confuse users.
• In this study, a wrapper approach built on the KNN algorithm is used as the learning method for feature selection, but KNN has unavoidable limitations such as low runtime efficiency.

Conclusions and Future Directions

We proposed an IGHHO. We improved the exploitation phase using differential perturbation and a greedy strategy to enhance population diversity. A new conversion strategy made the algorithm flexible in switching between exploration and exploitation, which enhanced its global search capability. The performance of IGHHO was verified on the CEC2017 test set with functions of different characteristics.
Conclusions and Future Directions
We proposed IGHHO, an improved variant of the HHO algorithm. We improved the development phase using differential perturbation and a greedy strategy to enhance population diversity, and a new transformation strategy made the algorithm flexible in switching between search and development, which enhanced its global search capability. The performance of IGHHO was verified on the CEC2017 test set with functions of different characteristics. In addition, we proposed a new objective function to address data imbalance in FS and applied IGHHO to the FS problem to verify its effectiveness in practical applications. The results demonstrated an improvement over the HHO algorithm in both computational accuracy and search efficiency, and the proposed algorithm proved efficient and reliable for practical optimization problems. However, IGHHO still has drawbacks: in the FS problem, although it generally outperformed the comparison algorithms, it did not account for the majority of first-place rankings. In future work, we will study more efficient optimization strategies, aim to propose a new improved algorithm with better performance, and apply it to other practical problems.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data that support the findings of this study are available from the UCI Machine Learning Repository. Restrictions apply to the availability of these data, which were used under license for this study.
Conflicts of Interest: On behalf of all authors, the corresponding author states that there is no conflict of interest.
On the height of Gross–Schoen cycles in genus three
We show that there exists a sequence of genus three curves defined over the rationals in which the height of a canonical Gross–Schoen cycle tends to infinity.
Introduction
Let X be a smooth, projective and geometrically connected curve of genus g ≥ 2 over a field k and let α be a divisor of degree one on X. The Gross–Schoen cycle Δ_α associated to α is a modified diagonal cycle in codimension two on the triple product X^3, studied in detail in [18] and [49]. The cycle Δ_α is homologous to zero, and its class in CH^2(X^3) depends only on the class of α in Pic^1(X). Assume that k is a number field or the function field of a curve. Gross and Schoen show in [18] the existence of a Beilinson–Bloch height ⟨Δ_α, Δ_α⟩ ∈ R of the cycle Δ_α, under the assumption that X has a "good" regular model over k. A good regular model exists after a suitable finite extension of the base field k, and one can unambiguously define a height ⟨Δ_α, Δ_α⟩ of the Gross–Schoen cycle for all X over k and all α ∈ Div^1(X) by passing to a finite extension of k where X has a good regular model, computing the Beilinson–Bloch height over that extension, and dividing by the degree of the extension. Standard arithmetic conjectures of Hodge Index type [16] predict that one should always have the inequality ⟨Δ_α, Δ_α⟩ ≥ 0, and that equality should hold if and only if the class of the cycle Δ_α vanishes in CH^2(X^3)_Q. Zhang [49] has proved formulae that connect the height ⟨Δ_α, Δ_α⟩ of a Gross–Schoen cycle with more traditional invariants of X, namely the stable self-intersection of the relative dualizing sheaf, and the stable Faltings height. Zhang's formulae feature some new interesting local invariants of X, called the ϕ-invariant and the λ-invariant. For α ∈ Div^1(X) let x_α be the class of the divisor α − K_X/(2g − 2) in Pic^0(X)_Q, where K_X is a canonical divisor on X. Then a canonical Gross–Schoen cycle on X^3 is a Gross–Schoen cycle Δ_α for which the class x_α vanishes in Pic^0(X)_Q. A corollary of Zhang's formulae in [49] is that for given X, the height ⟨Δ_α, Δ_α⟩ is minimized when Δ_α is a canonical Gross–Schoen cycle. The question of the non-negativity of ⟨Δ_α, Δ_α⟩ is therefore reduced to the case where Δ_α is canonical. As an example, for X a hyperelliptic curve and α a Weierstrass point on X one has by [18, Proposition 4.8] that Δ_α is zero in CH^2(X^3)_Q. It follows that the height ⟨Δ_α, Δ_α⟩ vanishes, and by Zhang's formulae the height of any Gross–Schoen cycle on a hyperelliptic curve is non-negative. When k is a function field in characteristic zero the inequality ⟨Δ_α, Δ_α⟩ ≥ 0 is known to hold by an application of the Hodge Index Theorem [50]. It seems that very little is known beyond the hyperelliptic case when k is a function field in positive characteristic, or a number field. Yamaki shows in [46] that ⟨Δ_α, Δ_α⟩ ≥ 0 if X is a non-hyperelliptic curve of genus three with semistable reduction over a function field, under the assumption that certain topological graph types do not occur as the dual graph of a special fiber of the semistable regular model. The purpose of this paper is to prove the following theorem.
Theorem A There exists a sequence of genus three curves over Q in which the height of a canonical Gross–Schoen cycle tends to infinity.
To the best of the author's knowledge, Theorem A is the first result to prove unconditionally the existence of a curve X over a number field such that a canonical Gross–Schoen cycle on X^3 has strictly positive height.
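For orientation, it may help to keep in mind the shape of the modified diagonal cycle of [18]. The display below is an editorial reminder following the standard convention, written for the case where α is a rational point (for a general degree-one divisor one extends the construction by linearity); it is not meant to reproduce the paper's own normalization verbatim.

% The Gross--Schoen modified diagonal cycle attached to a point alpha on X,
% viewed as a codimension-two cycle on the triple product X^3.
\[
\Delta_\alpha \;=\; \Delta_{123} - \Delta_{12} - \Delta_{13} - \Delta_{23}
+ \Delta_{1} + \Delta_{2} + \Delta_{3} \;\in\; \mathrm{CH}^2(X^3)_{\mathbb{Q}},
\]
\[
\Delta_{123} = \{(x,x,x)\}, \quad \Delta_{12} = \{(x,x,\alpha)\}, \quad
\Delta_{13} = \{(x,\alpha,x)\}, \quad \Delta_{23} = \{(\alpha,x,x)\},
\]
\[
\Delta_{1} = \{(x,\alpha,\alpha)\}, \quad \Delta_{2} = \{(\alpha,x,\alpha)\}, \quad
\Delta_{3} = \{(\alpha,\alpha,x)\}, \qquad x \in X.
\]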
Our proof of Theorem A is, like Yamaki's work, based on Zhang's formulae. More precisely we use the formula that relates the height of a canonical Gross-Schoen cycle on X 3 with the stable Faltings height of X. We then express the Faltings height of a nonhyperelliptic curve of genus three in terms of the well-known modular form χ 18 of level one and weight 18, defined over Z. Combining both results we arrive at an expression for the height of a canonical Gross-Schoen cycle on a non-hyperelliptic genus three curve X with semistable reduction as a sum of local contributions ranging over all places of k, cf. Theorem 8.2. The local non-archimedean contributions can be bounded from below by some combinatorial data in terms of the dual graphs associated to the stable model of X over k. This part of the argument is heavily inspired by Yamaki's work [47] dealing with the function field case. In fact, the differences with [47] at this point are only rather small: the part of [47] that works only in a global setting, by an application of the Hirzebruch-Riemann-Roch theorem, is replaced here by a more local approach, where the application of Hirzebruch-Riemann-Roch is replaced by an application of Mumford's functorial Riemann-Roch [41]. The modular form χ 18 is not mentioned explicitly in [47] but clearly plays a role in the background. As an intermediate result, we obtain an expression for the local order of vanishing of χ 18 in terms of the Horikawa index [35,44] and the discriminant, cf. Proposition 9.3. This result might be of independent interest. We will then pass to a specific family of non-hyperelliptic genus three curves C n defined over Q, for n ∈ Z >0 and n → ∞, considered by Guàrdia in [19]. In the paper [19], the stable reduction types of the curves C n are determined explicitly. By going through the various cases, we will see that the local non-archimedean contributions to the height of a canonical Gross-Schoen cycle on C n , as identified by Theorem 8.2, are all non-negative. To deal with the archimedean contribution, we observe that the curves C n are all fibers of the family of smooth curves The family D κ is rather special and has been studied in detail by various authors, see for instance Forni [17], Herrlich and Schmithüsen [24], and Möller [38]. As is shown in these references, the family D κ gives rise to a Teichmüller curve in M 3 , and to a Shimura curve in A 3 . Let E κ denote the elliptic curve y 2 = x(x−1)(x−κ). Then for each κ ∈ P 1 \{0, 1, ∞}, the jacobian of D κ is isogenous to the product E κ × E −1 × E −1 , by [19,Proposition 2.3] or [24,Proposition 7]. We show that the archimedean contribution to the height of a canonical Gross-Schoen cycle of C n is bounded from below by a quantity that tends to infinity like log n. In order to do this we recall the work [21] by Hain and Reed on the Ceresa cycle, which allows us to study the archimedean contribution as a function of κ ∈ P 1 \ {0, 1, ∞}. For n → ∞ we have κ → 0. The stable reduction of the family D κ near κ = 0 is known, see for instance [24,Proposition 8] and the asymptotic behavior of the archimedean contribution near κ = 0 can then be determined by invoking an asymptotic result due to Brosnan and Pearlstein [6]. The paper is organized as follows. In Sects. 2 and 3 we recall the non-archimedean and archimedean ϕand λ-invariants from Zhang's paper [49]. 
The main formulae from [49] relating the height of the Gross–Schoen cycle to the self-intersection of the relative dualizing sheaf and the Faltings height are then stated in Sect. 4. In Sect. 5 we display Zhang's λ-invariant for a couple of polarized metrized graphs that we will encounter in our proof of Theorem A. In Sect. 6 we recall a few general results on analytic and algebraic modular forms, and in Sect. 7 we recall the work of Hain and Reed, and of Brosnan and Pearlstein, that we shall need on the asymptotics of the archimedean contribution to the height. In Sect. 8 we discuss the modular form χ_18. The first new results are contained in Sect. 9, where we recall the Horikawa index for stable curves in genus three and show how it can be expressed in terms of the order of vanishing of χ_18 and the discriminant. This leads to a useful lower bound for the order of vanishing of χ_18. Sections 10-12 contain the proof of Theorem A.
Non-archimedean invariants
We introduce metrized graphs and their polarizations, and explain how a stable curve over a discrete valuation ring canonically gives rise to a polarized metrized graph (pm-graph). References for this section are, for example, [11, Sects. 3 and 4], [46, Sect. 1], [48, Appendix] and [49, Sect. 4]. In this paper, a metrized graph is a connected compact metric space Γ such that Γ is either a point or, for each p ∈ Γ, there exist a positive integer n and ℓ ∈ R_{>0} such that p possesses an open neighborhood U together with an isometry U ≃ S(n, ℓ), where S(n, ℓ) is the star-shaped set
S(n, ℓ) = {z ∈ C : there exist 0 ≤ t < ℓ and k ∈ Z such that z = t e^{2πik/n}},
endowed with the path metric. If Γ is a metrized graph, not a point, then for each p ∈ Γ the integer n is uniquely determined and is called the valence of p, notation v(p). We set the valence of the unique point of the point-graph to be zero. Let V_0 ⊂ Γ be the set of points p ∈ Γ with v(p) ≠ 2. Then V_0 is a finite subset of Γ, and we call any finite non-empty set V ⊂ Γ containing V_0 a vertex set of Γ. Let Γ be a metrized graph and let V be a vertex set of Γ. Then Γ \ V has a finite number of connected components, each isometric with an open interval. The closure in Γ of a connected component of Γ \ V is called an edge associated to V. We denote by E the set of edges of Γ resulting from the choice of V. When e ∈ E is obtained by taking the closure in Γ of the connected component e° of Γ \ V, we call e° the interior of e. The assignment e → e° is unambiguous, given the choice of V, as we have e° = e \ V. We call e \ e° the set of endpoints of e. For example, assume Γ is a circle, and say V consists of n > 0 points on Γ. Then Γ \ V has n connected components, and Γ has n edges. In general, an edge is homeomorphic to either a circle or a closed interval, and thus has either one endpoint or two endpoints. Let e ∈ E, and assume that e° is isometric with the open interval (0, ℓ(e)). Then the positive real number ℓ(e) is well-defined and called the weight of e. The total weight δ(Γ) = Σ_{e∈E} ℓ(e) is called the volume of Γ. We note that the volume δ(Γ) of a metrized graph is independent of the choice of a vertex set V. A divisor on Γ is by definition an element of Z^V. A divisor on Γ has a natural degree in Z. Assume we have fixed a map q : V → Z. The associated canonical divisor K = K_q is by definition the element K ∈ Z^V such that for all p ∈ V the equality K(p) = v(p) − 2 + 2q(p) holds. We call the pair Γ̄ = (Γ, q) a polarized metrized graph, abbreviated pm-graph, if q is non-negative and the canonical divisor K_q is effective.
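As a small worked example of these definitions, added here for concreteness and using only the formula K(p) = v(p) − 2 + 2q(p) above, consider the graph that reappears later in Example 5.5 and in the proof of Theorem A.

% Worked example: two vertices joined by two edges, polarized by
% q(v_1) = h and q(v_2) = g - h - 1.
Let $\Gamma$ be the metrized graph with vertex set $V=\{v_1,v_2\}$ and two edges of
weights $m_1, m_2$ joining $v_1$ and $v_2$, and let $q(v_1)=h$, $q(v_2)=g-h-1$ with
$h \ge 0$ and $g-h-1 \ge 0$. Both vertices have valence $2$, so
\[
K_q(v_1) = v(v_1) - 2 + 2q(v_1) = 2h \ \ge\ 0, \qquad
K_q(v_2) = v(v_2) - 2 + 2q(v_2) = 2(g-h-1) \ \ge\ 0 .
\]
Hence $K_q$ is effective and $\bar{\Gamma} = (\Gamma, q)$ is a pm-graph, with volume
$\delta(\Gamma) = m_1 + m_2$. Its genus, computed with the definition recalled in the
next paragraph, is $b_1(\Gamma) + h + (g-h-1) = 1 + (g-1) = g$.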
Let Γ̄ = (Γ, q) be a pm-graph with vertex set V. We call the integer g(Γ̄) = b_1(Γ) + Σ_{p∈V} q(p) the genus of Γ̄. Here b_1(Γ) ∈ Z_{≥0} is the first Betti number of Γ. We see that g(Γ̄) ∈ Z_{≥1}. We occasionally call q(p) the genus of the vertex p ∈ V. An edge e ∈ E is called of type 0 if removal of its interior results in a connected graph. Let h ∈ [1, g/2] be an integer. An edge e ∈ E is called of type h if removal of its interior yields the disjoint union of a pm-graph of genus h and a pm-graph of genus g − h. The total weight of the edges of type 0 is denoted by δ_0(Γ̄), and the total weight of the edges of type h is denoted by δ_h(Γ̄). We have δ(Γ) = Σ_{h=0}^{[g/2]} δ_h(Γ̄). We refer to [48] for the definition of the admissible measure μ on Γ associated to the divisor K = K_q, and the admissible Green's function g_μ : Γ × Γ → R. We will be interested in the following invariants, all introduced by Zhang [48,49]: the ϕ-invariant ϕ(Γ̄), the ε-invariant ε(Γ̄) and the λ-invariant λ(Γ̄), defined in terms of μ and g_μ; we refer to [48,49] for the precise formulae.
Let S = {e_1, . . . , e_n} be a subset of E. We define Γ_{{e_1}} to be the topological space obtained from Γ by contracting the subspace e_1 to a point. Then Γ_{{e_1}} has a natural structure of metrized graph, and the natural projection Γ → Γ_{{e_1}} endows Γ_{{e_1}} with a designated vertex set and maps each edge e_i for i = 2, . . . , n onto an edge of Γ_{{e_1}}. Continuing by induction we obtain after n steps a metrized graph Γ_S with natural projection π : Γ → Γ_S and designated vertex set V_S. The result is independent of the ordering of the edges in S and is called the metrized graph obtained by contracting the edges in S. Consider the pushforward divisor π_* K_q on Γ_S. It is then clear that π_* K_q is effective and has the same degree as K_q. The associated map q_S : V_S → Z is non-negative, and thus we obtain a pm-graph Γ̄_S = (Γ_S, q_S) canonically determined by S. Clearly we have g(Γ̄_S) = g(Γ̄). The pm-graph obtained by contracting all edges in E \ S is denoted by Γ̄^S = (Γ^S, q^S).
Assume Γ is not a point. When Γ_1, Γ_2 are subgraphs of Γ such that Γ = Γ_1 ∪ Γ_2 and Γ_1 ∩ Γ_2 consists of one point, we say that Γ is the wedge sum of Γ_1, Γ_2, notation Γ = Γ_1 ∨ Γ_2. By induction one has a well-defined notion of the wedge sum Γ_1 ∨ · · · ∨ Γ_n of subgraphs Γ_1, . . . , Γ_n of Γ. We say that Γ is irreducible if the following holds: whenever Γ = Γ_1 ∨ Γ_2 is written as a wedge sum, one of Γ_1, Γ_2 is a one-point graph. The graph Γ has a unique decomposition Γ = Γ_1 ∨ · · · ∨ Γ_n as a wedge sum of irreducible subgraphs. We call the Γ_i the irreducible components of Γ. Each Γ_i can be canonically seen as the contraction of some edges of Γ, and hence has a natural induced structure of pm-graph Γ̄_i of genus g, where g = g(Γ̄) is the genus of Γ̄. We call an invariant κ = κ(Γ̄) of pm-graphs of genus g additive if the invariant κ is compatible with decomposition into irreducible components. More precisely, let Γ̄ be a pm-graph of genus g and let Γ = Γ_1 ∨ · · · ∨ Γ_n be its decomposition into irreducible components, where each Γ_i has its canonical induced structure of pm-graph of genus g. Then we should have κ(Γ̄) = κ(Γ̄_1) + · · · + κ(Γ̄_n). It is readily seen that each of the invariants δ_h(Γ̄), where h = 0, . . . , [g/2], is additive on pm-graphs of genus g. By [49, Theorem 4.3.2] the ϕ-invariant, the ε-invariant and the λ-invariant are all additive on pm-graphs of genus g. Let G = (V, E) be a connected graph (multiple edges and loops are allowed) and let ℓ : E → R_{>0} be a function on the edge set E of G. We then call the pair (G, ℓ) a weighted graph. Let (G, ℓ) be a weighted graph.
Then to (G, ) one has naturally associated a metrized graph by glueing together finitely many closed intervals I(e) = [0, (e)], where e runs through E, according to the vertex assignment map of G. Note that the resulting metrized graph comes equipped with a distinguished vertex set V ⊂ . Let R be a discrete valuation ring and write S = Spec R. Let f : X → S be a generically smooth stable curve of genus g ≥ 2 over S. We can canonically attach a weighted graph (G, ) to f in the following manner. Let C denote the geometric special fiber of f . Then the graph G is to be the dual graph of C. Thus the vertex set V of G is the set of irreducible components of C, and the edge set E is the set of nodes of C. The incidence relation of G is determined by sending a node e of C to the set of irreducible components of C that e lies on. Each e ∈ E determines a closed point on X . We let (e) ∈ Z >0 be its so-called thickness on X . Let denote the metrized graph associated to (G, ) with designated vertex set V . We have a canonical map q : V → Z given by associating to v ∈ V the geometric genus of the irreducible component v. The map q is non-negative, and the associated canonical divisor K q is effective. We therefore obtain a canonical pm-graph = ( , q) from f . The genus g( ) is equal to the genus of the generic fiber of f . Let J : S → M g denote the classifying map to the moduli stack of stable curves of genus g determined by f . For h = 0, . . . , [g/2] we have canonical boundary divisors h on M g whose generic points correspond to irreducible stable curves of genus g with one node (in the case h = 0), or to reducible stable curves consisting of two irreducible components of genus h and g − h, joined at one point (in the case h > 0). Let v denote the closed point of S. Then for each h = 0, . . . , [g/2] we have the equality in Z, connecting the combinatorial structure of with the geometry of M g . Archimedean invariants In [49] Zhang introduces archimedean analogues of the ϕ-invariant and λ-invariant from (2.1) and (2.3). Let C be a compact and connected Riemann surface of genus g ≥ 2. Let H 0 (C, ω C ) denote the space of holomorphic differentials on C, equipped with the hermitian inner product We denote the resulting norm on det H 0 (C, ω C ) by · Hdg . Choose an orthonormal basis (η 1 , . . . , η g ) of H 0 (C, ω C ), and put following Arakelov in [2]. Then μ C is a volume form on C. Let Ar be the Laplacian operator on L 2 (C, μ C ), i.e. the endomorphism of L 2 (C, μ C ) determined by setting for f ∈ L 2 (C, μ C ). The differential operator Ar is positive elliptic and hence has a discrete spectrum 0 = λ 0 < λ 1 ≤ λ 2 ≤ . . . of real eigenvalues, where each eigenvalue occurs with finite multiplicity. Moreover, one has an orthonormal basis (φ k ) ∞ k=0 of L 2 (C, μ C ) where φ k is an eigenfunction of Ar with eigenvalue λ k for each k = 0, 1, 2, . . .. The ϕ-invariant ϕ(C) of C is then defined to be the real number (3.2) We note that this invariant was also introduced and studied independently by Kawazumi in [32]. One has ϕ(C) > 0, see [ Note the similarity with (2.3). For fixed g ≥ 2, both ϕ and λ are C ∞ functions on the moduli space of curves M g (C). Some of their properties (for instance Levi form and asymptotic behavior near generic points of the boundary) are found in the references [28][29][30][31][32]. 
Zhang's formulae for the height of the Gross-Schoen cycle The non-archimedean and archimedean ϕand λ-invariants as introduced in the previous two sections occur in [49] in formulae relating the height of a Gross-Schoen cycle on a curve over a global field with more traditional invariants, namely the self-intersection of the relative dualizing sheaf, and the Faltings height, respectively. The purpose of this section is to recall these formulae. In view of our applications, we will be solely concerned here with the number field case. Let k be a number field and let X be a smooth projective geometrically connected curve of genus g ≥ 2 defined over k. Let α ∈ Div 1 X be a divisor of degree one on X. Following [49, Sect. 1.1] we have an associated Gross-Schoen cycle α in the rational Chow group CH 2 (X 3 ) Q . The cycle α is homologous to zero, and has by [18] a well-defined Beilinson-Bloch height α , α ∈ R. The height α , α vanishes if α is rationally equivalent to zero. Assume now that X has semistable reduction over k. Letω denote the admissible relative dualizing sheaf of X from [48], viewed as an adelic line bundle on X. Let ω,ω ∈ R be its self-intersection as in [48]. Let O k be the ring of integers of k. Denote by M(k) 0 the set of finite places of k, and by M(k) ∞ the set of complex embeddings of k. We set where K X is a canonical divisor on X. Letĥ denote the canonical Néron-Tate height on Pic 0 (X) Q . With these notations Zhang has proved the following identity [49, Theorem 1.3.1]. [49]) Let X be a smooth projective geometrically connected curve of genus g ≥ 2 defined over the number field k. Let α ∈ Div 1 X be a divisor of degree one on X, and assume that X has semistable reduction over k. Then the equality Theorem 4.1 (Zhang We see from Theorem 4.1 that for fixed X, the height α , α attains its minimum precisely when x α is zero in Pic 0 (X) Q . We refer to α where x α is zero as a canonical Gross-Schoen cycle. Also, by Theorem 4.1, the non-negativity of the height of a canonical Gross-Schoen cycle (as predicted by standard arithmetic conjectures of Hodge Index type [16]) is equivalent to the lower bound for the self-intersection of the admissible relative dualizing sheaf. We recall that the strict inequality ω,ω > 0 is equivalent to the Bogomolov conjecture for X, canonically embedded in its jacobian. A conjecture by Zhang [49, Conjecture 4.1.1], proved by Cinkir [11,Theorem 2.9], implies that for v ∈ M(k) 0 one has ϕ(X v ) ≥ 0. As ϕ(X v ) > 0 for v ∈ M(k) ∞ we find that the right hand side of (4.1) is strictly positive. Hence, the non-negativity of the height of a canonical Gross-Schoen cycle implies the Bogomolov conjecture for X. We mention that in [27,Corollary 1.4] it is shown unconditionally that the inequality holds. This inequality is weaker than (4.1) if g ≥ 3 but still implies the Bogomolov conjecture for X. We next discuss the connection with the Faltings height. Let (L, ( · v ) v∈M(k) ∞ ) be a metrized line bundle on S. Its arithmetic degree is given by choosing a non-zero rational section s of L and by setting The arithmetic degree is independent of the choice of section s, by the product formula. As before let f : X → S denote the stable model of X over S. Let ω X /S denote the relative dualizing sheaf on X . We endow the line bundle det f * ω X /S on S with the metrics · Hdg,v at the infinite places determined by the inner product in (3.1). The resulting metrized line bundle is denoted det f * ωX /S . 
Its arithmetic degree deg det f * ωX /S is the (non-normalized) stable Faltings height of X. Let ω,ω denote the Arakelov self-intersection of the relative dualizing sheaf on X . The Noether formula [14, Theorem 6] [40, Théorème 2.5] then states that Here, for v ∈ M(k) 0 we denote by δ(X v ) the volume of the pm-graph associated to the base change of f : Similarly to ϕ(X v ) and δ(X v ) one also defines (X v ) (for v ∈ M(k) 0 ) and λ(X v ). The Arakelov self-intersection of the relative dualizing sheaf on X and the self-intersection of the admissible relative dualizing sheaf of X are related by the identity The λ-invariants for some pm-graphs The purpose of this section is to display the λ-invariants of a few pm-graphs that we will encounter in the sequel. We refer to the papers [9][10][11] by Cinkir for an extensive study of the ϕand λ-invariants of pm-graphs. The reference [9] focuses in particular on pm-graphs of genus three. Let be a metrized graph. Let r(p, q) denote the effective resistance between points p, q ∈ . Fix a point p ∈ . We then put where dx denotes the (piecewise) Lebesgue measure on . By [8, Lemma 2.16] the number τ ( ) is independent of the choice of p ∈ . It is readily verified that for a circle of length δ( ) we have τ ( ) = 1 12 δ( ), and for a line segment of length δ( ) we have τ ( ) = 1 4 δ( ). The τ -invariant is an additive invariant. Now let = ( , q) be a pm-graph of genus g, with vertex set V , and canonical divisor K . We set The next proposition, due to Cinkir, expresses λ( ) in terms of τ ( ), θ ( ) and the volume δ( ). Proposition 5.1 Let be a pm-graph of genus g. Then the equality Proof See [11,Corollary 4.4]. We will need the following particular cases. Example 5.3 Let be a pm-graph of genus g consisting of two vertices of genera h and g−h joined by one edge of length δ( ). Then Example 5.4 Let be a polarized metrized tree of genus g. Then we have This follows from the additivity of the λ-invariant and Example 5.3. Example 5.5 Let be a pm-graph of genus g consisting of two vertices of genera h and g − h − 1 and joined by two edges of weights m 1 , m 2 . We have Algebraic and analytic modular forms References for this section are [12], [13, Chapter V], [15] and [36]. Let g ≥ 1 be an integer. Let A g be the moduli stack of principally polarized abelian varieties of dimension g, and denote by p : U g → A g the universal abelian variety. Let U g /A g denote the sheaf of relative 1-forms of p. Then we have the Hodge bundle E = p * U g /A g and its determinant L = det p * U g /A g on A g . Kodaira-Spencer deformation theory gives a canonical isomorphism of locally free sheaves on A g , see e.g. [13,Sect. III.9]. For all commutative rings R and all h ∈ Z ≥0 we let denote the R-module of algebraic Siegel modular forms of degree g and weight h. Let H g denote Siegel's upper half space of degree g. We have a natural uniformization map u : H g → A g (C) and hence a universal abelian varietyp : U g → H g over H g . The Hodge bundleẼ =p * U g /H g over H g has a standard trivialization by the frame (dζ 1 /ζ 1 , . . . , dζ g /ζ g ) = (2πi dz 1 , . . . , 2πi dz g ), where ζ i = exp(2πiz i ). In particular, the determinant of the Hodge bundleL = detẼ is trivialized by the frame . Let R g,h denote the usual C-vector space of analytic Siegel modular forms of degree g and weight h. 
Then the map The Hodge metric · Hdg on the Hodge bundleẼ is the metric induced by the standard symplectic form on the natural variation of Hodge structures underlying the local system R 1p * Z U g on H g . The natural induced metric onL is given by for all ∈ H g . The Hodge metrics · Hdg onẼ resp.L descend to give metrics, that we also denote by · Hdg , on the bundles E resp. L on A g (C). Explicitly, let (A, a) ∈ A g (C) be a complex principally polarized abelian variety of dimension g, then we have the identity Here is any element of H g satisfying u( ) = (A, a). Assume now that g ≥ 2, and denote by M g the moduli stack of smooth proper curves of genus g. The Torelli map t : M g → A g gives rise to the bundles t * E and t * L on M g . Let π : C g → M g denote the universal curve of genus g, and denote by C g /M g its sheaf of relative 1-forms. Then we have locally free sheaves E π = π * C g /M g and L π = det E π on M g , and natural identifications E π ∼ − → t * E and L π ∼ − → t * L. Kodaira-Spencer deformation theory gives a canonical isomorphism of locally free sheaves on M g . Over C, the pullback of the Hodge metric · Hdg to L π coincides with the metric derived from the inner product (3.1) introduced before. Let M g ⊃ M g denote the moduli stack of stable curves of genus g, and consider the universal stable curveπ : C g → M g . Let ω C g /M g be the relative dualizing sheaf ofπ, and put Eπ =π * ω C g /M g and Lπ = det Eπ . Then Eπ resp. Lπ are natural extensions of E π resp. L π over M g . When S is a scheme or analytic space and f : X → S is a stable curve of genus g, we usually denote by E f = f * ω X /S and L f = det f * ω X /S the sheaves on S induced from Eπ and Lπ by the classifying map J : S → M g associated to f . Proof By (6.2) we have log * (dz 1 ∧ . . . ∧ dz g ) Hdg (t) = 1 2 log det Im (t) for all t ∈ D * . By the Nilpotent Orbit Theorem there exists an element c ∈ Z ≥0 such that det Im (t) ∼ −c log |t| as t → 0. We conclude that * (dz 1 ∧ . . .∧ dz g ) extends as a frame of Mumford's canonical extension [42] of L f | D * over D. By [13, p. 225] this canonical extension is equal to L f . We thus obtain the first assertion. Also we obtain the equality ord 0 (s, L f ) = ord 0 (s), which then leads to the asymptotic − log |s| ∼ − ord 0 (s, L f ) log |t| as t → 0. Combining with (6.3) we find the stated asymptotics for − log s Hdg . The element c ∈ Z ≥0 vanishes if the special fiber X 0 is a stable curve of compact type. This proves the last assertion. Asymptotics of the biextension metric In this section we continue the spirit of the asymptotic analysis from Lemma 6.1 by replacing the Hodge metric · Hdg with the biextension metric · B . We recall the necessary ingredients, and finish with a specific asymptotic result due to Brosnan and Pearlstein [6]. General references for this section are [20,21] and [22]. We continue to work in the analytic category. Let g ≥ 2 be an integer. Let H denote the standard local system of rank 2g over M g . Following Hain and Reed in [21] we have a canonical normal function section ν : Let B denote the natural biextension line bundle on J , equipped with its natural biextension metric [22]. By pulling back along the section ν : M g → J we obtain a natural line bundle N = ν * B over M g , equipped with the pullback metric from B. By functoriality we obtain a canonical smooth hermitian line bundle N on the base of any family ρ : C → B of smooth complex curves of genus g. 
As it turns out, the underlying line bundle of N on M g is isomorphic with L ⊗8g+4 π , where L π = det E π is the determinant of the Hodge bundle as before. An isomorphism N ∼ − → L ⊗8g+4 π is determined up to a constant depending on g, and by transport of structure we obtain a smooth hermitian metric · B on L π , well-defined up to a constant, that we will ignore from now on. Following [21] we define the real-valued function β = log · B · Hdg on M g . By [31, Theorem 1.4] the equality β = (8g + 4)λ holds on M g . Let a ∈ Z >0 and let s be a non-zero rational section of L ⊗(8g+4)a π over M g . Consider then the quantity on M g . We would like to be able to control its asymptotic behavior in smooth families over a punctured disk D * degenerating into a stable curve. Here we discuss a set-up to study this question. Consider a base complex manifold B and a stable curve holds as t → 0. Here as before the notation ∼ means that the difference between left and right hand side remains bounded. With Lemma 6.1 and (7.1) we then find that there exists a rational number c such that as t → 0. One would like to compute c. Hain and Reed have shown the following result [21, Theorem 1]. If X 0 has one node and the total space X is smooth one has that β = (8g + 4)λ ∼ −g log |t| − (4g + 2) log det Im (t) (7.2) if the node is "non-separating", and if the normalization of X 0 consists of two connected components of genera h > 0 and g − h. Referring back to Examples 5.2 and 5.3 we observe that the leading terms in the asymptotics in these cases are controlled by the λ-invariant of the polarized dual graph of the special fiber. We expect this behavior to extend to arbitrary stable curves X → D smooth over D * . More precisely, we should have the following. Let f : X → D be a stable curve of genus g ≥ 2 smooth over D * . Let denote the dual graph of X 0 endowed with its canonical polarization. Recall that if X 0 has r nodes the graph has r designated edges with weights equal to the thicknesses (m 1 , . . . , m r ) of the nodes on the total space X. Let λ( ) be the λ-invariant of . In general one expects that the asymptotic holds as t → 0. However this seems not to be known in general. We can characterize though when this asymptotics holds in terms of the classifying map I : D → B to the universal deformation space of X 0 , see Proposition 7.4 below. We hope that the criterion in Proposition 7.4 will be useful to prove the asymptotic in (7.4) in general. In the present paper, we are able to verify the criterion in a special case. The proof of the following lemma is left to the reader. Define for a pm-graph of genus g ≥ 1 its slope to be the invariant holds as t → 0, where μ( ) is the slope of as in (7.8). Proof Left and right hand side of the stated asymptotics change in the same manner upon changing the rational section s, and hence we may assume without loss of generality that s is the pullback along I of a rational section of L ⊗(8g+4)a ρ , where ρ : C → B is the universal deformation of X 0 . Let m 1 , . . . , m r be the multiplicities at 0 ∈ D of the analytic branches through b 0 ∈ B determined by the locus of singular curves in B. Then one has the asymptotics The required asymptotics follows. We deduce the following criterion to verify whether (7.4) holds. Proposition 7.4 The following assertions are equivalent: (a) one has the asymptotics as t → 0, (c) the height jump for the classifying map I : D → B to the universal deformation space of X 0 and the slope of the pm-graph associated to X are equal. 
Proof The equivalence of (a) and (b) follows from Lemma 6.1. The equivalence of (b) and (c) follows from Lemma 7.3. Now we have the following two results, that allow us to verify condition (c) in a special case. [6]) Assume that the stable curve X 0 consists of two smooth irreducible components, one of genus h, one of genus g − h − 1, joined at two points. Theorem 7.5 (Brosnan and Pearlstein Then the height jump j for the classifying map I : D → B to the universal deformation space of X 0 is equal to Here m Proof This follows directly from the definition (7.8), and Example 5.5. We observe that the height jump in Theorem 7.5 and the slope in Proposition 7.6 are equal. With Proposition 7.4 we thus obtain the following result. Corollary 7.7 Assume that the stable curve X 0 consists of two smooth irreducible components, one of genus h, one of genus g − h − 1, joined at two points. Then one has the asymptotics and as t → 0. We will use (7.11) with g = 3 and h = 1 for the proof of our main result. The modular form χ 18 From now on we specialize to the case that g = 3. We introduce the modular form χ 18 , following [26]. For more details and properties we refer to [36] and the references therein. On Siegel's upper half space H 3 in degree 3 we have the holomorphic functioñ where θ ε (0, ) denotes the Thetanullwert with characteristic ε, and where the product runs over all 36 even theta characteristics in genus three. We haveχ 18 ∈ R 3,18 , see [26, pp. 850-851] and [36,Sect. 1]. We define the corresponding element in S 3,18 (C). The analytic modular formχ 18 has a Fourier expansion as a power series in the variables q ij = exp(2πi ij ), with coefficients in Z, and by the q-expansion principle, cf. [13, p. 140], the modular form χ 18 is defined over Z, that is, we have a unique element in S 3,18 (Z) whose base change to C is equal to χ 18 . By a slight abuse of notation we also denote this element by χ 18 . By [25,Proposition 3.4] the modular form χ 18 = 2 −28 χ 18 is primitive, i.e. not zero modulo p for all primes p. We recall that one has a natural structure of reduced effective Cartier divisor on the locus H of hyperelliptic curves in M 3 . The following result seems to be well known. Proof Over C this follows from (the proof of) [45,Theorem 1]. Recall that M 3 is smooth over Spec(Z) with geometrically connected fibers. The primitivity of χ 18 then gives the statement over Z. Let S be a scheme. When f : X → S is a stable curve of genus three we can view χ 18 as a rational section of the line bundle L ⊗18 f on S. In particular, let k be a number field with ring of integers O k , and let X be a non-hyperelliptic genus three curve with semistable reduction over k. Let f : X → S = Spec O k denote the stable model of X over k. From Proposition 8.1 we obtain that χ 18 is generically non-vanishing on S, and from (4.2) we obtain the formula 18 Hdg,v for the (non-normalized) stable Faltings height of X. Combining with Corollary 4.2 we deduce the following result. We will take Theorem 8.2 as a starting point in our proof of Theorem A. The Horikawa index Let S = Spec R be the spectrum of a discrete valuation ring R. Let f : X → S be a stable curve with generic fiber smooth and non-hyperelliptic of genus three. Denote by v the closed point of S. As above we view χ 18 as a rational section of the line bundle L ⊗18 f on S. Then χ 18 is generically non-vanishing by Proposition 8.1. 
The aim of this section is to give a lower bound on the multiplicity ord v (χ 18 ) in terms of the reduction graph of the special fiber. The result is displayed in Corollary 9.5. We start by writing down an expression for the divisor div(χ 18 ) of χ 18 on the moduli stack M 3 . Let S be a scheme and let f : X → S be a stable curve of genus three. Then on S we have the locally free sheaves E f = f * ω X /S and G f = f * ω ⊗2 X /S as well as a natural map Proof The map ν π is generically an isomorphism and this shows that s π is not identically equal to zero. Let = div s π . Let t : M 3 → A 3 be the Torelli map. By [39, Sect. 1.3] the map t is finite. Let R denote the ramification divisor of t. By (6.1) and (6.4) we have canonical isomorphisms and the map t * A 3 /Z → M 3 /Z obtained by concatenating these isomorphisms with ν π : Sym 2 E π → G π coincides with the canonical structure map. The cokernel of the latter map is M 3 /A 3 ∼ = O R and the cokernel of ν π is O . We find that = R. By [39, Remark 1.1] the map t is ramified precisely along the hyperelliptic locus. As t has generic degree two we find R = H. Combining we obtain = H. Let again S = Spec R be the spectrum of a discrete valuation ring R. Let f : X → S be a stable curve with generic fiber smooth and non-hyperelliptic of genus three. The morphism ν : Sym 2 E f → G f is surjective at the generic point, hence is globally injective. Let Q f denote the cokernel of ν. Then Q f is a finite length O S -module, and we have an exact sequence of coherent sheaves on S with canonical maps, Let v denote the closed point of S. Following Reid [44], Konno [35] and Yamaki [46,47] we call the integer length O S Q f the Horikawa index of f at v, notation Ind v (f ). Let v denote the metrized graph associated to the stable curve f . holds. In particular, χ 18 is a global section of L ⊗18 f . Proof The Knudsen-Mumford determinant construction [34] associates to each coherent sheaf F on S a functorial invertible sheaf det F on S, by using locally free resolutions. From the locally free resolution (9.1) of Q f we obtain a canonical isomorphism invertible sheaves, and we find that s = det ν can be viewed as a canonical non-zero global section of det Q f . By Proposition 9.2 its divisor K satisfies the relation div(χ 18 ) = 2 K + 2 in Div(S). This gives the identity ord v (χ 18 ) = 2 ord v (s) + 2 δ( v ). We are thus left to prove that Ind v (f ) = ord v (s). By the structure theorem for finitely generated R-modules we can find effective Cartier divisors K i on S uniquely determined by Q f together with a decomposition Upon noting that det Q f = O(K ) canonically we obtain the equality K = i K i of effective Cartier divisors on S. This implies which is what we needed to show. Let = ( , q) be a pm-graph of genus three. From [46,47] we recall the notion of a pair of edges of h-type on , and the definition of an invariant h( ). We call a vertex v ∈ V ( ) eliminable if v has valence two and satisfies q(v) = 0. We assume that has no eliminable vertices. We continue to work with the spectrum S = Spec R of a discrete valuation ring R and a stable curve f : X → S of genus three, whose generic fiber is smooth and nonhyperelliptic. Let v denote the pm-graph associated to f . Note that v has no eliminable vertices. Let h( v ) be its h-invariant as above. Let e, e be nodes of X v . 
It is easy to see that the corresponding pair {e, e } of edges in v is a pair of edges of h-type if and only if both e, e are of type 0, and the partial normalization of X v at {e, e } has exactly two connected components, both of genus one. Combining Propositions 9.3 and 9.4 we find Corollary 9.5 The inequality holds. We saw in Sect. 8 that one has a natural structure of reduced effective Cartier divisor on the hyperelliptic locus H in M 3 . Let H be the closure of H in M 3 . Then as M 3 is smooth over Spec Z (see [33,Theorem 2.7]), one has a natural structure of reduced effective Cartier divisor on H. Not surprisingly, the Horikawa index at v can be directly expressed in terms of the multiplicity of H at v. We do not need the next result, but we would like to mention it for completeness. Proposition 9.6 Let H denote the closure of the hyperelliptic locus in M 3 as above. Then the identity Proof We only give a sketch of the proof. In view of Proposition 9.3 it suffices to prove the following statement. View χ 18 as a rational section of the line bundle L ⊗18 π on M 3 . Then the divisor div(χ 18 ) of χ 18 In particular, if the special fiber X v has a pair of nodes of h-type, then X v is hyperelliptic. Proof of Theorem A We can now combine all previous results in order to prove Theorem A. Following J. Guàrdia in [19] we consider the non-hyperelliptic genus three curves over Q given by the affine equation where n ∈ Z, n = 0, 1. Our aim is to prove the following result, which directly implies Theorem A. Theorem 10.1 Consider the sequence of curves C n with n ∈ Z >0 , n ≡ 2 (mod 3) and n ≡ 0, 1 (mod 2 5 ). Then the height of a canonical Gross-Schoen cycle on C 3 n tends to infinity as n → ∞. Put k n = k 0 (α, α n ). By [19,Sect. 3], for n as in Theorem 10.1 the curve C n acquires semistable reduction over k n . Let f : X n → Spec O n denote the stable model of C n over the ring of integers O n of k n . View χ 18 In Sect. 11 we will show the following result. holds. In Sect. 12 we will show Proof of Theorem 10.2 Assume that n is as in Theorem 10.1. The reduction types of C n at all v ∈ M(k n ) 0 are given in [19]. At a prime of O n not dividing n(n − 1) the curve C n has good reduction, by [19,Proposition 3.1]. At a prime of O n dividing 2 we have by [19,Theorem 7.4] that the special fiber consists of a smooth genus zero component, with three disjoint elliptic curves attached to it. The dual graph of the special fiber is thus a polarized tree in this case. Finally, at an odd prime of O n dividing n(n − 1) the special fiber is the union of two elliptic curves meeting in two distinct points, by [19,Theorem 5.3]. Hence in this case the polarized dual graph of the special fiber consists of two vertices of genus one, joined by two edges. We now analyze each of these various cases. Let S = Spec R be the spectrum of a discrete valuation ring R. Let f : X → S be a stable curve with generic fiber smooth and non-hyperelliptic of genus three. Let denote the polarized dual graph associated to f . Theorem 10.2 is proved by the following three lemmas. We deduce Theorem 10.4 from the latter result. Take n ∈ C \ {0, 1}. It is readily checked that the curve C n : y 4 = x 4 − (4n − 2)x 2 + 1 is isomorphic over C with the curve where κ = 1/n. Indeed, the four roots of x 4 − (4n − 2)x 2 + 1 are given by ± 2n − 1 ± 2 √ n 2 − n, and for a suitable ordering of these four roots, the associated cross ratio is 1/n. As κ → 0 the curves D κ degenerate into the tacnodal curve y 4 = x 2 (x − 1). 
By [24, Proposition 8] the stable reduction X → D of the family D κ at κ = 0 has as special fiber the union of two copies of the elliptic curve E −1 given by the equation y 2 = x 3 − x, joined at two points. By Lemma 11.3 the rational number b = 1 18 ord 0 (χ 18 ) − λ( ) associated to the stable family X → D is strictly positive. Take d a positive integer such that the family D κ ∼ = C n has semistable reduction near κ = 0 after a ramified base change of degree d (we can take d = 2 but this is not important). Putting t = d √ κ = d √ 1/n we deduce from Corollary 12.1 that
Impact of the COVID-19 pandemic on stress and emotional reactions in Israel: a mixed-methods study
Abstract
Background The COVID-19 pandemic has had a profound impact worldwide. This study sought to assess the pandemic's psychological impact on the Israeli public.
Methods Using mixed methods we assessed Israeli adults during the COVID-19 outbreak. In the quantitative study, participants (N=1407) completed an online battery of measures assessing psychological variables and perceived threat related to COVID-19. Statistical analyses included tests for between-group differences and Pearson correlations. The qualitative study entailed in-depth, semistructured interviews conducted by telephone (N=38).
Results The quantitative findings indicate that about 48% of the public had negative emotional reactions and 20% perceived they were liable to contract the virus. Moreover, a positive correlation was found between these feelings and the degree of perceived threat. Three major themes emerged from the qualitative study: 1) a sense of shock and chaos; 2) gradual adjustment to the new reality; and 3) fears and concerns for self and family members. The study's results revealed the following sources of participants' emotional responses and sense of threat: health concerns regarding themselves and their loved ones; employment concerns; problems with children and spouses caused by being together at home; and difficulties entailed in working at home.
Conclusions The study reveals many of the psychological variables and perceived threats related to COVID-19 in Israel. While social distancing may make people feel safer, it can also increase their feelings of isolation, stress and frustration and cause difficulties in many life situations. The findings point to the necessity of addressing the public's perceived susceptibility and emotional reactions to COVID-19.
Introduction
The 2019 coronavirus disease caused by the novel coronavirus (SARS-CoV-2) began in the city of Wuhan in China and spread quickly around the world, generating a global health crisis of massive proportions. As a result of this pandemic, people found themselves forced to cope with new emotional challenges and particularly with feelings of stress, uncertainty and fear. COVID-19 poses a real threat to physical and emotional health. 1 Indeed, previous research on viruses shows that pandemic situations exert an emotional impact on people's levels of stress and resilience. 2 People's fears that they themselves or those close to them will become ill or will die, together with feelings of fear and helplessness, may generate psychological effects. In one early study of the general public, a considerable share of respondents rated the psychological impact of the outbreak as moderate or severe, 16.5% reported moderate to severe depressive symptoms, 28.8% reported moderate to severe anxiety symptoms and 8.1% reported moderate to severe stress levels. 10 Another study that examined 52 730 participants in Hong Kong found that 35% reported feeling stressed about COVID-19, with women reporting higher levels of stress than men. 1 Israel's population is 9.1 million (median age: 30 y). 11 The first case of COVID-19 in Israel was diagnosed towards the end of February 2020. By the end of the research period, thousands were in isolation at home, 13 930 were diagnosed with the virus and, according to Ministry of Health figures for 21 April 2020, 181 had died from the disease. 12 Like other countries, Israel implemented diverse containment measures, including quarantines and closures.
13 In mid-March, the government decided to close down the education system and to prohibit gatherings of >10 people. Entertainment and other public venues were closed, including restaurants, movie theatres, gyms, shopping centres, places of worship, beaches and parks. People were advised to avoid large gatherings at work and to maintain a distance of at least 2 m between employees. 12 The Israeli Ministry of Health regularly released and updated guidelines and instructions to explain the new daily routine to the general public (e.g. taking precautions such as frequently washing hands with soap and water or alcohol-based hand sanitisers, avoiding close contact with people showing symptoms, refraining from shaking hands and covering the mouth and nose when coughing or sneezing). 12 The ongoing rise in the numbers of suspected and diagnosed cases is also liable to affect the public's estimations of the severity and controllability of the virus. [14][15][16][17] In an Israeli study conducted among 639 participants, gender, sociodemographic status, chronic illness, being in an atrisk group and having a family member who had died of COVID-19 were positively associated with fear of COVID-19, and fear levels were associated with anxiety, stress and depression. 18 Therefore, the current study used quantitative methods to examine the Israeli public's emotional reactions and perceived susceptibility to COVID-19. It then employed qualitative methods to investigate how the Israeli public perceived their experiences with the virus, how they expressed their emotions on this matter and how they perceived their own coping methods. Procedure and participants The quantitative study entailed a cross-sectional online survey conducted among 1407 participants in Israel between 12 and 21 March 2020 (Table 1). To minimise personal contact during the outbreak, the questionnaires were administered online through the Qualtrics online platform (www.qualtrics.com). A link to the electronic survey was distributed via Facebook or WhatsApp. Before completing the survey, participants were asked to read an informed consent form and to indicate that they agreed to participate in the study. Only after giving their consent were they permitted to answer the questionnaire. To be included in the study participants had to be aged ≥18 y and able to speak Hebrew. Exclusion criteria were: being aged <18 y, which is the cutoff age for requiring parental consent; giving responses that conformed to a similar pattern (e.g. choosing the same answer across many consecutive items or for the entire questionnaire); or failing to complete the entire questionnaire (specifically, people who began filling out the questionnaire but did not complete it did not participate in the study). Participants who completed the entire questionnaire but omitted a single demographic item, such as gender or health status, did participate in the study. The qualitative study examined a sample from the Israeli public (N=38; 29 women and 9 men). Participants were contacted via adverts on social media asking them to take part in a research study and a personal telephone interview. Those who were interested sent an email to the researcher, who contacted them and explained the study to them. A telephone interview was set up with those who agreed to participate. Before the interview, the consent form was read aloud to the participants, who then gave their informed consent. 
The interviews were conducted by telephone due to the closure and the guidelines regarding social distancing.
Phase 1: Quantitative study
Measures
Perceived susceptibility was assessed based on previous studies conducted among the general public (e.g. 19) using a one-item measure examining participants' perceived likelihood of contracting the virus (e.g. 'In your opinion, how likely is it that you will contract COVID-19?'). 17 Participants answered on a five-point Likert-type scale ranging from 1=not at all likely to 5=very likely. Emotional reactions to COVID-19 were assessed based on previous studies conducted among the general public (e.g. 19) using three questions related to worry, fear and stress caused by COVID-19 (e.g. 'To what extent do you worry about COVID-19?'). 17 Participants answered on a five-point Likert-type scale ranging from 1=not at all to 5=very much. A composite index averaging all items was created, with a higher score indicating higher levels of negative emotional reactions toward COVID-19. The internal consistency of the index was excellent (Cronbach's α=0.94). Sociodemographic variables included gender, age, years of education, marital status (married/divorced/widowed/single/other), number of children, medical problems (yes/no), health status (poor/fair/good), home isolation since the COVID-19 outbreak (yes/no) and resources that would make coping with COVID-19 easier (more information regarding COVID-19/professional support/lay support/working from home/other).
Statistical analyses
The data were analysed using SPSS version 25 (IBM, Armonk, NY, USA). Descriptive statistics were used to describe participants' demographic characteristics and the research variables. Spearman correlations were calculated to assess the associations between the research variables, and the Bonferroni correction for multiple comparisons was applied.
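Although the authors used SPSS, the core of this analysis is straightforward to reproduce with open-source tools. The snippet below is an illustrative sketch only: the synthetic data frame stands in for the survey responses, and the variable names are chosen by the editor rather than taken from the study's codebook.

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Synthetic stand-in for the survey data; column names are illustrative only.
rng = np.random.default_rng(1)
n = 1407
df = pd.DataFrame({
    "age": rng.integers(18, 98, n),
    "perceived_susceptibility": rng.integers(1, 6, n),
    "emotional_reactions": rng.integers(1, 6, n),
})

# Pairwise Spearman correlations with a Bonferroni correction.
pairs = [("perceived_susceptibility", "emotional_reactions"),
         ("age", "perceived_susceptibility"),
         ("age", "emotional_reactions")]
rhos, pvals = [], []
for x, y in pairs:
    rho, p = stats.spearmanr(df[x], df[y])
    rhos.append(rho)
    pvals.append(p)
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for (x, y), rho, p, sig in zip(pairs, rhos, p_adj, reject):
    print(f"{x} vs {y}: rho={rho:.2f}, Bonferroni-adjusted p={p:.3f}, significant={sig}")

A hierarchical regression step analogous to Table 4 could be added with an ordinary least squares model, entering the background and health variables first and perceived susceptibility in a second block.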
Phase 2: Qualitative study
Methods
The study adopted a qualitative-phenomenological approach. 20 This type of approach attempts to obtain an in-depth understanding of the studied phenomenon by entering the world and experiences of the participants. Such a paradigm facilitates examining the voices and experiences of the informants as they choose to express them, thus providing a deeper understanding of the interviewees and arriving at insights that give meaning to multidimensional phenomena. 21
Procedure and instrument
The research instrument was a semistructured, in-depth questionnaire. The interviewer encouraged participants to talk about their experiences in their own words. The interviews were conducted based on an interview guide (see Appendix 1) that included significant key areas, yet was flexible enough to allow both the development of a dialogue between interviewer and interviewee and meaningful self-expression. 22 All interviews were conducted by telephone, audio-recorded and subsequently transcribed. Participants gave their consent to record the interviews. Each interview lasted about 30 min. Data collection and analysis proceeded until theoretical saturation was reached (i.e. additional interviews yielded no new material for analysis).
Data analysis
The content analysis in this study included the following stages: 1) open coding: the principal investigator first read each interview transcript line by line, jotting down notes to capture and identify initial units of meaning (categories) emerging from the data; 2) the same researcher reviewed the major themes and discussed them with the other researcher; 3) axial coding: upon reading the transcripts a second time, the researchers gradually detected associations between themes and subthemes related to context and content, and compared all completed interviews to consolidate meanings and arrive at a theoretical construct; and 4) integration: the core themes or main categories emerging from the data were reordered conceptually and placed back into context, making it possible to analyse and integrate large amounts of data and to generate abstractions and interpretations. 22
Quantitative phase results
This study was a cross-sectional online survey conducted among 1407 participants in Israel. The majority of the respondents were female (80%). Their mean age was 41 (range 18-97) y and they had an average of about 16.5 (range 9-30) y of education. Most were married (63%) and had an average of two children. About 85% reported having no health problems and about 80% reported their health status as good. Moreover, participants indicated that working from home was their preferred resource for coping with COVID-19. Table 3 summarises the means, SDs, ranges and Spearman correlations of the study variables. The mean scores for emotional reactions (M=2.72, SD=0.93, range 1-5) and perceived susceptibility (M=3.25, SD=1.14, range 1-5) were about mid-scale. A positive association was found between perceived susceptibility and emotional reactions to COVID-19 (r=0.30, p<0.001). Moreover, perceived susceptibility exhibited negative associations with age, gender and health status, indicating that those who were older and female, and who perceived that their personal health status was not good, reported higher perceived susceptibility. Likewise, emotional responses exhibited negative associations with age, gender, marital status and health status, so that participants who were older, female and unmarried, and who perceived that their personal health status was not good, reported higher emotional reactions. Table 4 shows the two multiple hierarchical regressions calculated for perceived susceptibility and emotional reactions. Background and health-related variables were entered in the first step, and perceived susceptibility and negative emotional reactions were added in the second step. The results show that age and perceived health status are negatively related to perceived susceptibility, such that perceived susceptibility is higher for women and for participants whose perceived health status is lower. Age, gender and perceived susceptibility are related to emotional reactions, such that emotional reactions are higher among younger participants, women and participants with higher perceived susceptibility.
Qualitative phase results
The findings of the qualitative study yielded three main themes: 1) a sense of shock and chaos; 2) gradual adjustment to the new reality; and 3) fears and concerns for themselves and their loved ones.
Theme 1 - 'I'm losing control': sense of shock and chaos
All 38 participants in the qualitative phase described how their lives had changed overnight. This change was drastic, surprising, powerful and difficult to contain.
When the COVID-19 epidemic broke out in China, the participants felt the virus was far away and they were immune to it. As the epidemic got closer, reaching European countries and also Israel, the respondents' sense of shock and serious concern rose. People received their information from diverse media sources. Some of these sources were reliable, such as the Israeli Ministry of Health. Other less reliable sources spoke of conspiracy theories and a feeling that the world was coming to an end. Moreover, the guidelines provided by the Ministry of Health were not always clear to everyone, leading to confusion, information chaos and inner upheaval. Twenty-two participants indicated that their feelings ranged from a sense of indifference to denying the situation and continuing as usual. They then began to internalise the difficult situation as it filtered down to them. They were overcome by a sense of lack of control and helplessness that necessitated changes in their behaviour.

I was overcome by tremendous fear due to COVID-19, fear of contracting the disease, fear of the new reality. I felt helpless and it's hard for me to function in this new reality that has been forced upon me …. I thought the situation would pass quickly and I never thought it would reach such great proportions that the entire country would be shut down (26-y-old single woman).

Twenty-nine of the participants described being shocked and confused. They searched for a direction to follow and wondered how they should behave and what they should do. The situation generated feelings of frustration and of being stuck.

Theme 2 - 'Recalculating the route': gradual adjustment to the new reality
Most of the participants (32) indicated that after their initial feelings of shock, they found themselves gradually adjusting to the new reality that had been forced upon them by COVID-19 and the Ministry of Health guidelines. Adjustment difficulties stemmed from having to remain within their own household, limit their movements and, for some, stop working. The participants described how they internalised these guidelines and made changes to their familiar lifestyle.

As the situation developed and the virus began to spread, I saw how everything around me began to close and shut down… I began to think differently, to conduct myself differently, to leave the house less frequently, to pay much more attention to maintaining hygiene than usual, to keep my distance from people, to refrain from touching, even my own family. My life routine has changed. I leave the house only if I need something, groceries or the pharmacy or other urgent necessities (32-y-old married man, one child).

The main issue among all 20 research participants who had children focused on how to maintain a daily routine. Before this happened, the children were at school, attended afterschool activities and met friends. Now parents were charged with the difficult task of mediating this new situation for their children while at the same time maintaining a routine without any clear or defined framework.

Yet despite all these complexities, I understood that this is a good time and wonderful opportunity for a mother with a family (42-y-old married woman, three children).

The parents described how they are adjusting to the new situation by setting up a regular routine that includes chores, arts and crafts, cooking activities, home exercise workouts and cleaning. Most of the children study via distance learning and many of the parents work from home.
Most of the parents (16) described being exhausted by these diverse roles and by the physical and emotional burden they entailed.

I found that the easiest way for us to cope with the isolation was to set up a regular daily routine. When this first began, we tidied up the house, set up an arts and crafts space on a large table and began designing a lovely family album with all the photos we developed from our last family vacation. This took us three whole days and we enjoyed working together on this project (31-y-old married woman, two children).

An important element in adjusting to this new situation is coping together as a couple. Members of a couple who are accustomed to working outside the home and seeing each other only several hours a day found themselves together for many hours. If one partner is not working, they may feel unneeded and useless. Such situations can lead to frustration and conflict between partners and within the family. The family as a unit must adjust to the new reality.
The research participants also included young single students, nine of whom were forced by the situation to return to their parents' homes. These participants described how difficult this was, citing their feelings of having regressed by returning to live with their parents and siblings, a sense of having been robbed of their freedom. Yet the situation also provided an opportunity to renew their acquaintance and draw closer to their families.

Over the last two days the situation began to get worse. The entire family at home all the time is challenging and really difficult. We are all at different ages and stages, and each of us wants to do what we feel like doing. It's especially tough for me because before COVID-19 I was usually not at home. I was studying or at work or out with my girlfriends and suddenly I can't even leave the house. It drives me crazy (23-y-old single woman).

Theme 3 - 'It's hard to fall asleep and even harder to get up in the morning': fears and concerns for self and family
All the research participants described their concerns about their own health and the health of their loved ones. Ten mentioned that people close to them were under quarantine or had tested positive for COVID-19, increasing their stress and sense of helplessness. They described being overwhelmed by worry due to the uncertainty about when it would end and what would happen to them and their loved ones.

There are moments of crisis. Especially at night. Sometimes I feel I'm on the brink of losing hope. I have trouble coping with the ambiguity and uncertainty about how long this crisis will last. There's no definitive time limit, and I wonder how long I'll be able to be strong. How will things be when this is all over? What about the children? How long can they hang on? What about my parents? When I finish my daily routine I am flooded with worries and have trouble falling asleep (43-y-old divorced woman, four children).

Thirty-three of the participants expressed concerns about older family members or those at risk due to pre-existing medical conditions. They worried about their parents and grandparents. These worries led to a sense of uncertainty about the future, anxiety and lack of control over the situation.

My grandmother is in a high-risk group. She must not leave the house, and the grandchildren, including me, cannot visit her. At first I found it very difficult to discuss the situation with her. She continued going to the grocery store and following her regular routine.
When all her activities closed down she understood that the situation is really problematic and dangerous for her. I try to speak to her as frequently as possible by phone and video calls with the other grandchildren. The situation is very worrisome and makes me sad, both the physical risk of the virus and the emotional toll, because she now feels more lonely than ever before. Not only will she feel alone, she'll also feel she's getting old, something she has always feared (24-y-old single woman).

Some of the participants described how their concerns and fears affected their behaviour and emotions. They mentioned sleep disturbances, restlessness, irritability and difficulties in performing tasks. They felt depressed, anxious, sad and lonely as well as nervous, uncertain and helpless.

Discussion
The objective of the current study was to use a mixed-methods approach to examine psychological responses to the COVID-19 outbreak among the Israeli public. The findings indicate that about 48% of the public had negative emotional reactions and 20% believed they were likely to be infected with the virus. Moreover, a positive association was found between emotional responses and extent of the perceived threat. Emotional reactions were higher among younger participants, women and participants with higher perceived susceptibility. The qualitative study expanded our understanding of the psychological process people underwent. Participants described their sense of shock and chaos at the outbreak of the epidemic, followed by a gradual process of adjustment to the new situation along with fears and concerns for their own welfare and that of their loved ones.
The findings point to the presence of some degree of psychological distress among half the respondents, which, if not properly managed, has the potential to progress. The emergence of mental health issues in the wake of life-threatening events has been demonstrated among survivors of the Ebola and SARS outbreaks, who exhibited stress, worry and post-traumatic stress disorder (PTSD) symptoms. 23,24 A large Italian study of 18 147 individuals during the COVID-19 lockdown found PTSD symptoms and depression among 37% of respondents. 25 Examinations of the mental health status of the general population during the COVID-19 pandemic revealed signs of stress, anxiety and depression in Japan, 26 Iran 27 and Germany. 28
Since COVID-19 is a global threat, all the news and media channels provide continuous coverage of the epidemic, and specifically of the Ministry of Health's updated statistics and most recent guidelines. 12 The qualitative study revealed people's prevailing sense of confusion and chaos at the time of the outbreak, their need to readjust to their new situations and their concerns for their families. Moreover, most people locked down in their homes watch the news non-stop and tend to panic about the rising numbers of infections and deaths. 17,29 Indeed, media exposure is another possible explanation for the high levels of stress and emotional response emerging in this study.
The study was conducted about 3 mo after the initial COVID-19 outbreak and 1 mo after the crisis hit Israel. The Israeli public had already received health guidelines and information about the virus via the media. One Israeli study found that >80% ascribed the public's concerns over COVID-19 to media coverage of the outbreak. 30 A study on COVID-19 conducted in India found that media exposure increased the public's anxiety level. 31 Evidence indicates that repeated engagement with trauma-related media content for several hours a day shortly after a collective trauma may prolong acute stress. 32 In the specific case of the COVID-19 outbreak, greater exposure to threat increases people's fears regarding the virus. 33 Yet we must also bear in mind that people can quickly become habituated to such exposure. 34
In the qualitative study, the participants' emotional responses and sense of threat stemmed from their concerns for their own health and that of their loved ones, their worries about employment, their difficulties in staying home with their children and spouses and the problems posed by working at home. These findings are in line with findings of previous studies conducted during the COVID-19 outbreak pointing to a variety of concerns among research participants: health anxiety, personal health, the threat to loved ones, risk control, employment, virus spread and economic and societal consequences. 35,36 Likewise, societal safety measures (e.g. lockdowns) have their use in preventing the spread of infections. Yet when such safety measures are too prolonged or too strict, they can have negative consequences, among them economic disruption and unemployment. 37 Social distancing that includes closing shops and schools and working from home is likely to make people feel safer but also to increase their feelings of isolation, stress and frustration and to cause difficulties in many life situations (in the family, between members of a couple, in the sphere of employment). 38 Another major source of fear regarding COVID-19 was the perceived risk of loved ones being infected. This fear can be mitigated by providing the general public with clear information about the risks and by taking (additional) steps to protect vulnerable groups at risk of infection. 3,4
This study has several limitations. Most of the study participants were female. In addition, the qualitative research questions sought to understand the research participants' feelings, behaviours and means of coping with the coronavirus crisis. Nevertheless, these questions limited the possibilities for deriving additional authentic information from the participants, such as how they coped with economic issues and changes in spousal relations. Given that the survey was conducted during the fourth week of the virus outbreak in Israel, it portrays an immediate and initial picture of the reactions of the general public to COVID-19. As the virus continues to spread, the behavioural guidelines are constantly being changed in light of the rising morbidity and mortality rates in Israel and worldwide. Therefore, research should continue to explore psychological and emotional responses among the general public over time. The question regarding resources for coping with COVID-19 was a single-choice question and did not give respondents the possibility of selecting several options. The validity of answers is a general problem of online surveys, which we attempted to address by the differential approach described in the Methods section.
The use of a mixed-methods, cross-sectional approach limits the ability to generalise our results to a wider population and to make claims about directionality; conclusions about directionality or causality in the relationships observed should therefore be treated with caution.
Civil society, human rights and religious freedom in the People's Republic of China: analysis of CSOs' Universal Periodic Review discourse

ABSTRACT
This article examines religious freedom in the People's Republic of China (PRC) using critical frame analysis of state and civil society organisations' (CSOs) policy discourse associated with the United Nations (UN) Universal Periodic Review (UPR). The findings show how indigenous Chinese CSOs' input to the UPR is limited. Their voice is muted, and some merely mirror the rhetoric of the ruling Chinese Communist Party (CCP). In contrast, international CSOs are highly critical of what they see as state failure to uphold religious freedom. The analysis reveals a significant disjuncture between the policy discourse of international CSOs and the CCP. The former's discourse is framed in terms of denial of rights, imprisonment, legal failings, (re-)education, torture, and persecution. In the absence of enforcement mechanisms, CCP input to the UPR can be seen as part of a process of legitimation and performativity, allowing the ruling elite to afford primacy to what it dubs 'a framework of socialism with Chinese characteristics' at the expense of religious freedoms.

Introduction
Notwithstanding its status as a fundamental human right, 1 the global rise of extremism and associated threats has made religious freedom a key international issue. In response, scholarly enquiry has underlined the centrality of regime type to religious freedoms. 2 This study addresses a lacuna by exploring religious freedom issues from a civil society perspective in relation to the People's Republic of China (PRC). The PRC is a propitious research context because of international concerns over religious suppression 3 and the proscription of quasi-religions, 4 such as Falun Gong (FLG). 5 Such developments have thrown religious freedom issues into sharp relief. Yet, according to the ruling Chinese Communist Party (CCP) elite, external criticism is ill-founded. It asserts that it 'safeguards its citizens' freedom of religious belief, protects normal religious activities, defends the lawful rights and interests of religious communities, and assists them in resolving substantive difficulties'. 6 Now is therefore a timely juncture to examine such matters, thereby addressing a key knowledge gap, for, as Fenggang Yang's seminal study notes:

so far most of the scholarly attention to religious freedom in China has been on the formal regulations and CCP policies … the least studied area is the actual practice and defense of religious freedom by religious communities and civic organizations in civil society. 7

'Civil society' here is defined as associational activities involving the family, non-governmental organisations, pressure groups, charities, community groups, social movements and campaigning organisations. 8 These may operate within national boundaries. Yet, over recent decades, a burgeoning literature has delineated the emergence of 'global civil society'. This (albeit contested) term denotes how, in various ways and forms, civil society organisations (CSOs) increasingly operate across national boundaries. As Helmut Anheier and Nuno Themudo note, they are organisationally diverse and 'range from large-scale charities with hundreds of staff to transnational volunteer-run networks with no real expenditures at all … '. 9 In all its forms, civil society is an appropriate locus of enquiry in a number of regards.
Foremost, it is the social arena in which associative life is shaped by norms, practices and beliefs attached to faith. This influences social cohesion and citizen trust in government, 10 thereby shaping governmentality, political stability and public administration, 11 as well as economic and social development. 12 It also affects the extent to which international norms are embedded in local practices, 13 and the provision of community support and services. 14 This in turn links to wider themes. Notably, the nature of governance in China and how it is shaped by the interplay of international and domestic forces, 15 as well as the nexus between democratisation and development. 16
To explore CSOs' views on religious freedom in the PRC, this study employs critical discourse analysis. It is a methodology that is supported by diverse strands of social theory including the interpretive school of policy analysis 17 and the literature on social constructivism. 18 Both place emphasis on language in order to reveal policy actors' beliefs, values, interpretations and knowledge relevant to addressing a given policy issue. 19 The discourse analysis has two components, 'framing' and 'issue salience'. The former derives from Goffman and refers to a 'schemata of interpretation'. 20 Frames 'render events or occurrences meaningful, [they] function to organise experience and guide action, whether individual or collective'. 21 Thus, framing is central to understanding rights and freedoms involving multiple actors working across the public and civil spheres. In turn, 'issue-salience' is a technique borrowed from electoral studies 22; it focuses on the level of attention given to topics and frames amongst competing issues and agendas in political discourse.
The data source is CSO reports submitted to successive cycles of the UN's Universal Periodic Review (UPR). In short, this is a five-yearly assessment that incorporates CSOs' submissions on the extent to which governments uphold their human rights obligations; specifically, for the present purposes, those in relation to religious freedom. Accordingly, the core research aim is to compare and contrast civil society and Chinese government discourse on religious freedom over two cycles of the UPR. The remainder of the article is structured thus: following an overview of the literature on religious freedom and civil society in China, and a summary of the methodology, analysis of state and civil society UPR discourse is presented. The principal findings and their implications are discussed in the conclusion.

Religious freedom and civil society in China
A full history of state policy on religion in China is beyond the present purposes. Yet as Guo and Zhang's leading account details, 23 the period following the Cultural Revolution has witnessed some key changes. Foremost, state policy on religion needs to be seen in the context of the priority the CCP attaches to political stability and the retention of power: 'promoting the unity of all people (believers and non-believers) and their efforts to build a modern socialist country was the Party's basic task; religious differences were relatively secondary'. 24 In consequence, in 1982 'protecting religious freedom' was incorporated into the fourth edition of the constitution. Today, whilst government authorities recognise five major religions (Buddhism, Taoism, Catholicism, Islam and Protestantism), some other religious beliefs are tolerated to varying degrees.
Indeed, according to the government: 'there are about 5,500 religious groups … along with nearly a hundred religion-affiliated academic institutions and as many as 140,000 places of religious activity … Religious clergy number some 360,000, and there are around 100 million believers'. 25 Historically, the Republic of China was a signatory to the original UN Charter and the UN Universal Declaration of Human Rights. In modern times the successor PRC is a signatory to the International Covenant on Civil and Political Rights (ICCPR) (1998). Under this the PRC is expected to uphold Article 18 (1): Everyone shall have the right to freedom of thought, conscience and religion. This right shall include freedom to have or to adopt a religion or belief of his [sic] choice, and freedom, either individually or in community with others and in public or private, to manifest his [sic] religion or belief in worship, observance, practice and teaching. However, almost two decades on, the PRC has yet to ratify the ICCPR. Despite this failing, the Chinese authorities have ratified other binding UN treaties that include provisions safeguarding religious freedom. A leading example is the UN Convention on the Rights of the Child: states Parties shall respect the right of the child to freedom of thought, conscience and religion … [and] shall respect the rights and duties of the parents and, when applicable, legal guardians, to provide direction to the child in the exercise of his or her right (Article 14). Another is the UN Convention on the Elimination of all Forms of Racial Discrimination. 26 Article 5(d, vii) is unambiguous and asserts 'the right to freedom of thought, conscience and religion'. However, despite these obligations, as the following analysis affirms, the reality is one of mixed progress. For example, a recent report by the UN concluded that: notwithstanding the assurances provided by the State party delegation, the Committee remains concerned about reports that members of some minority groups do not fully enjoy freedom of religion … Taking into account the intersectionality between ethnicity and religion, the Committee recommends that the State party ensure respect for the right of members of all ethnic groups to freely enjoy freedom of religion. 27 In addition to UN treaties, further safeguards are set out in the constitution. For example, Article 36 provides that 'the State protects normal religious activities' (emphasis added). Yet, as with the UN instruments, there are (non-)compliance issues. For example, as Guobin Zhu cogently observes of Article 36, 'what is regarded and defined as "abnormal" activities? [… it] may be easily subject to subjective and arbitrary interpretation'. 28 This issue of conditionality and interpretation also applies to the National Human Rights Action Plan passed by the State Council in 2012. 29 It re-emphasises the principle of freedom of religious belief ('China upholds the principle of freedom of religious belief stipulated in the Constitution and strictly implements the Regulations on Religious Affairs to guarantee citizens' freedom of religious belief'), as well as the goal of 'protecting normal religious activities according to law'. However, it also includes an opaque clause; namely, that 'the Action Plan was formulated in line with the following basic principles … The principle of pursuing practicality'. The issue of what is deemed 'practical' is not defined. 
Against this backdrop, contemporary accounts of religious freedom in China indeed paint a mixed picture. In the case of Christianity and other faiths, there are some positive assessments. For example, Changgang Guo and Fengmei Zhang observe 'the CCP's contemporary religious policy is still deeply misunderstood'. 30 Moreover, longitudinal survey data on citizens' subjective evaluation of political changes found that more than half of respondents reported that they believed freedom of religion had improved. 31 In contrast, an opposing literature asserts that when religious (and 'quasi-religious') organisations are felt to pose a threat to CCP power and stability, oppression and rights denial are a reality. As one account puts it, although the CCP has asserted 'Christianity [i]s compatible with [its] vision of China as a "harmonious society" [it …] has continued to preach atheism and places restrictions on faith'. 32 Extant work also details how adherents of other faiths are subject to oppression, notably Muslim Uyghurs in Xinjiang. 33 In the case of quasi-religions such as Falun Gong, surveillance, security operations, 'interrogation and conversion programs in prisons and labor reform institutions' are widespread. 34 In short, critics argue that the ruling elite has adopted an 'instrumentalist approach of law-making, [one that] tends to restrict the exercise of religious freedom to serve a political agenda'. 35 Overall, this burgeoning literature suggests that the PRC is a context in which 'the process of identification of "evil cult" is rather selective, and mainly based on political considerations'. 36 As one account explains, the 'severe repression' of Falun Gong arose because it:

triggered alarm bells within the CCP for two reasons. First, the Party realized that the movement was proving more powerful in undermining the ideology of the state than any organized religion. Secondly, Falun Gong persistently demanded freedom of assembly, which was something the Party could not tolerate. 37

Overall, in the period since 2005, the PRC has seen the relaxation of some aspects of state regulation of religion. Yet, as Ani Sarkissian's account explains, state Regulations on Religious Affairs (RRA):

continue to restrict and repress religions in a number of ways: CCP members are still required to declare their atheism … religious bodies are still required to register with government … the government may still refuse registration to any group for any reason without justification … [and] all sites for religious activities must be registered with the government … . 38

In sum, the literature on religious freedom and civil society in China is a polarised one. One strand points to the language of the constitution and CCP pronouncements, arguing that claims of oppression are overstated. The other provides accounts of widespread suppression. What is missing is a systematic analysis of civil society's discourse on the issue juxtaposed with that of the ruling elite. Attention now shifts to how the present study addresses that gap.

Methodology
This study uses critical discourse analysis with an examination of issue salience and policy 'framing'. The latter can be viewed as 'a necessary property of a text, where text is broadly conceived to include discourses, patterned behaviour, and systems of meaning, policy logics, constitutional principles, and deep cultural narratives'. 39
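Before detailing the coding procedure, it may help to make the quantification concrete. The sketch below illustrates, in simplified form, the kind of quasi-sentence segmentation and issue-salience counting described in the following paragraphs: each segment is matched against keyword lists approximating the UNDHR-derived frames, and salience is reported as the percentage of quasi-sentences per frame. The segmentation heuristic, keyword lists and input file name are hypothetical simplifications; the study itself relied on inductive manual coding, supported by software and checked by a second coder.

import re
from collections import Counter

# Hypothetical keyword lists approximating the UNDHR-derived frames in the coding schema.
FRAME_KEYWORDS = {
    "rights/freedoms": ["right", "freedom", "belief"],
    "detention": ["detention", "detain", "imprison", "arrest"],
    "education": ["education", "re-education", "school"],
    "torture/violence": ["torture", "beating", "violence"],
    "persecution/oppression": ["persecution", "oppress", "repress"],
    "discrimination": ["discriminat"],
}

def quasi_sentences(text):
    # Crude stand-in for manual segmentation: split on sentence-final punctuation
    # and semicolons, keeping non-empty segments.
    return [segment.strip() for segment in re.split(r"[.;!?]+", text) if segment.strip()]

def frame_salience(text):
    segments = quasi_sentences(text)
    counts = Counter()
    for segment in segments:
        lowered = segment.lower()
        for frame, keywords in FRAME_KEYWORDS.items():
            if any(keyword in lowered for keyword in keywords):
                counts[frame] += 1  # a segment may register under more than one frame
    total = len(segments) or 1
    return {frame: 100.0 * n / total for frame, n in counts.most_common()}

if __name__ == "__main__":
    # upr_submission.txt is a hypothetical plain-text export of one UPR report.
    with open("upr_submission.txt", encoding="utf-8") as handle:
        report_text = handle.read()
    for frame, share in frame_salience(report_text).items():
        print(f"{frame}: {share:.1f}% of quasi-sentences")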
In the present analysis, frames in the UPR texts were coded using an inductive coding schema based on key frames taken from the UNDHR (including: 'rights/freedoms', 'detention', 'education', 'torture/violence', 'persecution/oppression', and 'discrimination'). 40 In addition, the principal frames in the discourse were further analysed to identify tropes. These are crosscutting 'figures of speech and argument that give persuasive power to larger narratives [including frames] of which they are part'. 41 Frame use was quantified by drawing upon the notion of 'issue-salience'. This measures the level of attention to a given topic or frame amongst competing issues and agendas in the discourse. It is determined by content analysis, or the frequency of key words, ideas or meanings in policy documents. 42 This was done by adapting a procedure derived from electoral studies, whereby texts are divided into 'quasi-sentences' (or, 'an argument which is the verbal expression of one political idea or issue'). To operationalise the mixed methodology, electronic versions of the UPR submission documents were analysed in relation to discourse on religious freedoms using appropriate software. 43 To increase reliability the coding was repeated by a research assistant. This revealed a limited number of discrepancies. In total, seven instances were identified (under 1%); these were resolved through discussion between the coders.
As noted, the data source was the CSO reports and Chinese government submissions to the first and second cycle UPRs in 2009 and 2013. A total of 112 CSO reports were analysed. The extant literature distinguishes between indigenous, 'grassroots' CSOs and international CSOs ('global civil society'). A recent study offered an assessment of the challenges facing indigenous CSOs in China (here termed NGOs, or 'non-governmental organisations'): 'Grassroots NGOs survive only insofar as they refrain from democratic claims-making and address social needs that might fuel grievances against the state … ' 44 Crucially, CSOs concerned with the issue of human rights and religious freedom fall outside the foregoing notion of 'contingent symbiosis', for they are largely concerned with criticality and challenging state practices. In response, the present research design controls for CSO type. The working hypothesis here is that, compared to CSOs based outside mainland China, 'indigenous' CSOs are potentially more constrained and less critical in their submissions for fear of state reprisals. Accordingly, the dataset was divided into two categories: 'indigenous CSOs' (37) and international CSOs (75). Members of the former category were identified by the postal addresses given in the UPR submission (or, in a minority of instances where this was absent, by an internet search of organisational details linked to the CSO name). The second sub-set of international CSOs was a diverse grouping headquartered in other jurisdictions. 45

State UPR discourse on the right to freedom of religious belief
In its national reports submitted to the UPR in 2009 and 2013, the government of the PRC's discourse on religious freedoms is mainly descriptive in character, offering little more than a basic recitation of legal and constitutional instruments. Problems and challenges are unacknowledged. For example:

The Constitution expressly provides that citizens enjoy freedom to believe or not to believe in any religion.
No State organ, organization or individual may force citizens to believe or not to believe in any religion, nor may they discriminate against citizens who believe or do not believe in any religion. 46 The discourse also includes basic statistics, (for example, 'the number of Muslims professing the Islamic faith has increased from 18 million in 1997 to 21 million', etc.); as well as examples of state funding for religious organisations (for example, 'beginning in 2009, the funds provided by the Chinese Government to religious communities for the maintenance and repair of temples and other places of worship were increased to 20 million yuan, and again in 2011 to 30 million yuan'). 47 The most important passage is contained in the 2013 UPR submission. It sets out the ruling CCP's position on religious freedoms: The Chinese Government is working to explore paths for human rights development, establishing a robust system of human rights safeguards, and continuously enriching the theory of human rights, all within the framework of socialism with Chinese characteristics … It coordinates and promotes the safeguarding of civil, political, social, and cultural rights as well as the rights of special groups … to ensure that every citizen enjoys a life of ever-greater dignity, freedom and well-being. 48 In conceptual terms, this is significant for it signals cultural relativism (or, the need to adapt the UNDHR to the different cultures applying in different states). This is evident in the telling phrase 'within the framework of socialism with Chinese characteristics'. Thus, the state discourse is unambiguous. It confirms that rights implementation in the PRC is to be qualified. It will be on the CCP's terms. There is also evidence of dissembling in the state reports. For example, all normal religious duties performed by the clergy, such as the normal religious activities carried out in places of worship or believers' homes in accordance with religious custom, are regulated by religious organizations and the believers themselves; these activities are protected by law and may not be interfered with by any person. 49 Here, the state discourse is again qualified by undefined terms. In this case, 'normal' religious activities. Also absent is any reference to the Regulations on Religious Affairs, the requirement for CCP members to declare atheism, religious bodies to register with the government, government's right to refuse registration for any reason, and the requirement that all sites for religious activities must be registered with the government. The state reports also contain annexes for the Special Administrative Regions of Macao and Hong Kong. Strikingly, these contain no specific reference to religious freedoms. Instead, the language is generalised. It alludes to 'minorities' and 'fostering in the community a culture of mutual understanding, tolerance and respect'. 50 On the religious freedoms of ethnic minorities, the state discourse is often directly at odds with extant scholarly analysis, as well as UN reports. For example, in its first-cycle UPR report, the ruling CCP elite stated, 'China safeguards the right of ethnic minorities to use and develop their own spoken and written languages, endeavours to protect their cultures and respects their customs, habits and religious beliefs'. 51 Yet analysis of the situation in Xinjiang reveals that, in addition to Uyghur language tuition being outlawed in universities, it is increasingly uncommon in secondary and primary schools. 
In turn, as a leading scholarly account notes, such practices are fuelling 'a feeling of victim-hood created by domestic Chinese oppression (and repression), combined with an empathy for similarly "oppressed" peoples in the Middle East'. 52 In the case of Tibet, the Chinese government discourse is also instrumental in nature. It alludes to compliance with the UNDHR. It refers to, fully respecting freedom of religious belief in ethnic regions in Tibet, the manner of succession of the reincarnated Living Buddha is fully respected, and traditional religious activities proceed normally … Tibetan cultural customs and practices continue to be handed down and protected. 53 This contrasts with a raft of academic studies. For example: the Tibetans and the Uyghurs, experience severe limitations when they want to practice their traditional religion. … The intense fear of the Chinese Communist Party (CCP) of a possible link between religion and ethnic separatism has put many restraints on the constitutional guarantees of the right to freedom of religious belief in Tibet and Xinjiang. 54 The state discourse is also at odds with academic analysis of succession issues. Here, as Adrien Frossard cogently observes: 'the most likely succession scenario is one of protracted fight for legitimacy … the possibility of an agreement between the exiles and the Chinese authorities on this matter is ruled out'. 55 Two conceptual strands help explain the Chinese state UPR discourse. The first, instrumentalism, draws on the philosophy of John Dewey, 56 and privileges ideas and language as instruments of action. Their worth is gauged by their usefulness to a given end. It is an approach that emphasises pragmatism, practical purpose and adjustment. 57 Applied to the present case, China's discourse is instrumental in the sense that it seeks to affirm current practice in the PRC as consistent with the countries' obligations under UN rights instruments in order to satisfy the administrative requirements of the UPR; whether or not this is actually the case. In doing this, the discourse presents an example of institutional decoupling. In other words, the situation whereby state elites espouse one thing but do another. This results in contradiction, inconsistency and a gap between policy rhetoric and delivery. 58 Civil society organisations' UPR discourse on right to freedom of religious belief (A). Indigenous CSOs UPR submissions from indigenous CSOs constitute under a third (31%) of the total of 112 reports studied. They were analysed separately in line with the working hypothesis that CCP constraints would undermine and limit their criticality, resulting in contrasts in the framing and issue salience (compared to international CSOs). Recent insightful work by Taco Brandsen and Ruth Simsanon describes the prevailing context: non-profit organizations [CSOs] also face criticism as they operate in closer relationships with the state than would be common for their Western counterparts, within narrow limits defined by the state that tend to restrict dissenting voices … Civic engagement is encouraged in order to maintain regime stability rather than to increase participation. 59 In a similar vein, Zhang Yuanfeng's penetrating account also alludes to how the Chinese state is 'seeking to retain a degree of control over non-profit operations … in the highly constrained atmosphere of th[e] biggest remaining socialist country'. 
60 The impact of this distinctive context is reflected in the indigenous CSOs' discourse which is revealed to be qualitatively different to that of international CSOs. Accordingly, as the following textual analysis confirms, the working hypothesis is proven. The circumscribed nature of civil society-state relations is evident in the fact that only five of the indigenous CSO reports make direct reference to issues of religious freedom in the UPR submissions. Of the latter, the discourse is largely uncritical and supportive of the status quo. This is in stark contrast to the excoriating criticism in the discourse of international CSOs (see below). Discourse analysis reveals that the indigenous CSOs' discourse often emulates CCP rhetoric. For example: It is the best time now for China to implement the right to freedom of religious belief set forth by the Constitution and the relevant religious policies formulated by the Chinese government … In general, it is an obvious fact that all religious groups, religious believers and nonbelievers get along well with each other, understand and forgive each other. With progress of its modernization, pluralism of its economic structure and diversification of its spiritual culture, China has provided more loose space for the development of various religions' cultures. 61 Weighed against the graphic accounts of suppression and violence, the 'indigenous' discourse provides a striking contrast. For example, various religions, religious sects and believers and non-believers in Tibet respect each other and live in harmony. The monks have established committees of democratic management through democratic elections to exercise independent management of religious affairs and arrange religious activities … This fully demonstrates that believers and laymen alike in Tibet have truly gained religious freedom and basic human rights. 62 There is limited evidence of 'indigenous' CSO criticality. Where it exists it is often muted in tone. (B). International CSOs In contrast, as the following analysis reveals, international CSOs' UPR submissions are caustic in their criticism of what they view as CCP suppression of religious freedom. When the discourse in the submissions is disaggregated by frame there is consistency in the lead frames across both UPR cycles (Table 1). In other words, the same three frames gain most attention in both the 2009 and 2013 CSO reports. This is significant because it shows the endurance of key rights issues, indicates limited progress and reflects continuing CSO dissatisfaction. Accordingly, the lead frame is 'denial of rights/freedoms' which accounts for just under a third of quasi-sentences overall (31.5%). The second is 'imprisonment/detention/detainment'; this accounts for just under a fifth of all references (19.2%). The third is 'shortcomings in legal matters' (11.2%). The nature of the discourse in relation to specific frames is now considered. The lead frame is 'denial of rights'. Reflecting what the CSOs view as a deepening problem, it is subject to increased attention over the first and second UPR cycles (rising from 27.9% to 35.5% of quasi-sentences). Textual analysis shows CSOs' awareness of institutional decoupling -(or the gap between state rhetoric and reality), and how the primacy of the CCP's socialist vision and desire to retain power shapes policy and practice. 
For example: Despite its official policy of respect for the freedom of religion, China's overarching concern is ensuring the adaption of religion in order to 'safeguard the security, honour and interests of the motherland', 65 a requirement which renders the freedom of religion illusionary. China requires that religious belief is practised in a way that accepts the leadership of the Party above all else. 66 The perceived denial of rights to 'unofficial' religious groups and ethnic minorities is also highlighted repeatedly. For example, one CSO describes, 'ongoing crackdowns against ethnic minorities, members of non-state-sanctioned religious groups, petitioners and 68 It continues, 'such acts go against the letter and spirit of the UN Charter, and violate every article of the UDHR and all international human rights treaties'. 69 Even for officially recognised faiths, some CSOs paint a picture of repression. For example, 'the Chinese authorities have imposed political and religious policies that that have been against the principles and practices of the Catholic faith, and they have gravely violated human rights … ' 70 Over the first and second UPR cycles there is also increased attention to the second frame, 'detention/imprisonment' (rising from 17.8% to 20.8% of quasi-sentences). The CSO discourse emphasises that detention and imprisonment are not exceptional, but widespread phenomena. One CSO sought to quantify its prevalence: the Chinese Government regularly arrests and imprisons religious adherents who, in turn, claim that such arrests were based on their religious practices … According to the Law Yearbook of China, 8,224 cases of disturbing the social order or cheating by the use of superstition were filed. 71 Others underlined the impact of detention on individual faiths. For example: 'China's human rights record is one of the worst in the world … there are more Christians in prison in China than any other country in the world. The only legal churches are those strictly controlled by the government'. 72 However, it is the detention of Falun Gong followers that receives most attention in the UPR submissions. For example, 'reflecting a continued [state] commitment to wipe out the practice, the CCP launches regular, nationwide efforts to eradicate FLG through propaganda, imprisonment, torture, and forced conversion'. 73 Notably, CSOs underline how domestic law (inter alia, security laws and, failure to register religious groups) are used as a pretext for detention. For example, one CSO noted, 'China has used the Criminal Law … to justify holding prisoners under house arrest or in undisclosed locations even after they have completed their sentence'. 74 The discourse also details CSOs' concern that detention not only affects individuals directly, but also those that represent them in law. For example, one cited the case of a lawyer 'known for his work in defence of Falun Gong practitioners and religious rights … detained for almost two months. His whereabouts are unclear'. 75 A further core trope is the situation in PRC-administered Tibet. For example, one CSO argued that, monks and nuns make up approximately 58 per cent of the political prisoner population … [Adding that …] 824 Tibetan political or religious prisoners [are] believed to be currently detained or imprisoned. Of these 824 Tibetans, 479 are monks, nuns, or reincarnate lamas. 
76 The third frame, 'shortcomings in legal matters', is also subject to increased attention over the first and second UPR cycles (rising from 9.1% to 13.4% of references). A core strand of the discourse is CSOs' view that legal and constitutional guarantees of religious freedom are not being upheld. For example, one asserted that 'religious freedom abuses in China … primarily result from the government's failure to enforce religious freedom guarantees and the prevalence of religiously motivated violence … the Chinese government continues to perpetrate religious abuses on a variety of religious groups'. 77 The state's failure to ratify the International Covenant on Civil and Political Rights (ICCPR) is also subject to repeated criticism. For example, one CSO argued that the absence of ratification 'should not stop China from protecting and preventing any possible violation of human rights, [yet …] unlawful practice by the court is a common phenomenon'. 78 Other CSOs highlighted a paradox whereby, they argued, the contemporary prevalence of religious persecution was itself a barrier to ICCPR ratification. One observed, it 'would also be impeded by the continued widespread criminal prosecutions of individuals for exercising their rights to free expression, association and assembly, [and] to freedom of religion and belief'. 79 Further core tropes under the 'legal matters' frame include arbitrary procedures and the absence of due process. For example, one CSO referred to how 'the ability to practice and express religious faith is hindered by inconsistent local enforcement of the laws [on religious freedom]'. 80 Another alluded to how some members of religious groups are 'not prosecuted under the criminal justice system … essential procedural safeguards must be upheld: laws should not be used to punish people on the basis of their "anti-social" behaviour as assessed by non-judicial bodies'. 81 Attention to the fourth frame '(re-)education' declined slightly over the UPR cycles (from 9.1% to 5.8% of quasi-sentences). The frame has two related strands: 'regular' education (formal and informal, comprising schooling as well as higher education); and reeducation. The latter is a punitive, corrective process instigated by the state and intended to make individuals revise their attitudes and beliefs to be compliant with CCP mores. The CSO discourse recounts how the latter practice also applies to lawyers defending individuals falling foul of the governing authorities. As one CSO notes, 'the Re-education through Labour system has been used to facilitate the incarceration of … human rights defenders, and individuals who practice their religion outside official channels'. 82 Again, the UPR discourse underlines the particular patterns and processes of alleged repression in Tibet. For example: patriotic re-education (PRE), a compulsory programme, which aims to quash loyalty to the Dalai Lama and Tibetan nationalist feelings … It seeks to change fundamental elements of thought, conscience and religious belief. Historically, patriotic re-education campaigns were aimed at monasteries and nunneries, but it has been extended to schools, institutions of higher education and locations of protest since 2008. 83 A further core trope is re-education of Falun Gong followers. 
For example: 'the authorities operate hundreds of "Legal Education Training Centres" across the country, often referred to as "brainwashing centres", designed specifically for the "transformation" of Falun Gong practitioners, where they are coerced into renouncing their beliefs'. 84 The 'torture/beatings' frame has been subject to increased CSO attention over the two UPR cycles (from 6.1% to 7.0% of references). Such discourse is extensive and applied to a range of religious groups, including state-recognised faiths. For example, one CSO alluded to the fact that, in its view, 'urgent human rights concerns in the PRC include … forced confessions and torture in the justice system … persecution of religious believers who refuse to join state-controlled churches'. 85 However, Falun Gong followers are the most prominent group identified as being at risk of such practices. For example, one CSO referred graphically to how 'practitioners arrested have suffered several forms of torture, including beatings, electric baton shocks, hanging for hours to days, deprivation of sleep for days … ' 86 In a similar vein, another alluded to the fact that the use of torture against FLG practitioners in China remains widespread and systematic. Reports … continue to be received from contacts in China on a daily basis. Torture is used primarily for the purpose of forced religious conversion, as well as to extract information on the whereabouts and activities of other individuals. 87 A further strand under this frame concerned what CSOs viewed as the obvious 'disconnect' between treaty obligations and contemporary practice. Notably, one underlined how, China's active seeking of a reservation from the UN Convention against Torture has allowed the violent practice to endure: political activists, religious minorities, and women continue to be subjects of torture and other persecution … China ratified the Convention against Torture on 4th October 1988, [but] with a reservation to Article 20. Under this reservation the Chinese Government does not authorize the Committee against Torture to investigate allegations of torture in China. 88 In the case of the sixth frame, 'persecution/oppression', the discourse paints a worrying picture. One CSO observed, 'the Chinese authorities continue to criminally punish and to use illegal, arbitrary and violent methods to intimidate and persecute individuals for the peaceful exercise of their … right to freedom of expression, religion, belief, association and assembly'. 89 Others alluded to the way that state officials clamp down on unregistered groups including quasi-religions or cults: 'the need for national security is being used as a pretext for religious persecution … ' 90 The comments of CSOs also underline the lack of enforcement of policy of the UN rights framework. For example, expressing frustration over the absence of progress since the First Cycle, one opined, the 'Chinese government persecutes religious practitioners and political dissidents … in 2009, the first UPR cycle reviewed China … However, [named cases] have lasted for 14 years, the persecution still occurs constantly'. 91 CSOs' negative assessment of the denial of religious freedom in the PRC is evident in their discourse under the state 'control/restriction' frame. As one CSO complained, China only allows groups registered with the government … to legally hold worship services. 
[Some …] religious groups are not permitted to register as legal entities, while some [other] religious and spiritual groups are outlawed completely. Proselytizing in public or unregistered places of worship is forbidden. 92 Thus, the discourse describes a societal context in which 'unregistered religious groups face intense pressure to register, and suffer severe consequences if they refuse'. 93 State monitoring is widespread and systemic in nature; as the following example reveals: 'the laws stipulate that new religious centres may only be developed with state permission through a registration process. This process allows the state to monitor religious activities … Such state-led intervention is contradictory to an atmosphere of religious freedom'. 94 Others contend that, far from liberalising the situation, the 2005 Regulations on Religious Affairs have allowed the state to tighten some aspects of its control over religion, despite official claims to the contrary. It allows for local officials to arbitrarily arrest believers, close places of worship, and place restrictions on the movement and action of clergy. 95 The plight of ethnic groups has gained increasing attention over UPR cycles. The CSO submissions set out why, in their view, minorities are subject to religious oppression. For example, one lamented that, 'the Government's … identification of Tibetans, Uyghurs, and Mongols' assertions of cultural, religious, or ethnic identity, as separatist or splittist, compounds the discrimination against, and disenfranchisement of, these ethnic groups'. 96 The UPR discourse describes numerous cases of oppression, as typified by these examples: 'a Uyghur and Christian house church leader, was arrested and his family was told it was a "national security issue" [… whilst] another house church leader in Xinjiang was detained, this time for "inciting separatism"'. 97 In turn, China's management of Tibet is founded on the CCP's position that, owing to its link to the Dalai Lama, religious belief is antagonistic to both socialism and the Chinese state. It is a view captured in the discourse of several CSOs. For example, one noted that: 'in Tibetan areas, the government has responded to a string of 101 self-immolations protesting repressive policies since February 2009 with increased restrictions on movements, communication, expression and religion'. 98 Moreover, the CSOs highlighted the continuing and deepening nature of the problem: 'there is clear evidence … that there is a direct correlation between the self-immolations and unrest in Tibet and an intensified campaign against the Dalai Lama combined with the expansion of legal measures tightening state control over Tibetan religion'. 99 The civil society discourse also highlights legal measures taken by the Chinese state that directly contradict the account it gave in its submission to the UPR about religious succession (see above). For example, one CSO noted that the, '[Tibetan Autonomous Region] TAR Measures for Implementation of the Regulations on Religious Affairs' was passed, placing the responsibility for picking and educating all future Panchen lamas … in the hands of the government. This effectively gives them control over the future leadership of the religion. 100 Somewhat forlornly, the CSO discourse also appeals for a stronger monitoring role for the UN. For example, Area of Concern: Compliance with United Nations Human Rights Mechanisms. 
China must meet agreements for other [UN] Special Rapporteurs to visit, including the Special Rapporteur on Religious Freedom, and accept that such visits must include Tibet as an area of specific concern. 101 The case of Tibet also illustrates CSOs' use of the next frame: 'protest'. Thus, for example, one observed that, 'sweeping new measures [have been] introduced … to purge monasteries of monks and restrict religious practice in the wake of protests across the plateau [… such developments] reveal a systematic new attack on Tibetan Buddhism that is reminiscent of the Cultural Revolution'. 102 Another recounted that: 'the government, deploying large numbers of security forces, did not distinguish between violent and peaceful demonstrators, and […] accused the Dalai Lama of being behind the protests'. 103 Article 7 of the UNCHR proscribes 'discrimination'. CSO submissions over both UPR cycles are framed in a way that highlights the widespread and intersectional nature of faith-related discrimination in the PRC. 'Intersectionality' here refers to the need for more sophisticated policy responses that deal with discrimination stemming from the intersection between multiple, simultaneous protected characteristics (for example, gender and faith, ethnicity and faith, etc.). 104 Notwithstanding its pervasive nature, there is no mention of it in Chinese government UPR reports. In contrast, it receives extensive attention in the civil society discourse. One strand concentrates on discrimination in the workplace and public sphere. For example, discrimination is also present in Xinjiang's administrative and business employment sector, in which the 'distinctive' religious, dietary, and linguistic characteristics of Muslims are used as a pretext to deny them access to positions of responsibility on the grounds that the employing unit is 'inadequately equipped' to meet their special needs. 105 The intersectionality of gender, religious belief and ethnicity are also prominent tropes. For example, 'Tibetan women live under severe restrictions to their political, religious, reproductive, and social freedoms. There is a severe lack of fundamental human rights, despite the establishment of the Beijing Platform for Action (BPfA) in 1995'. 106 As with the other frames, the discourse underlines CSOs' deep-felt frustration at the lack of progress between UPR cycles. For example, China's Promotion of Religious Discrimination … The 2009 UPR expressed concern that 'Chinese officials continue to repress religious activities considered to be outside the Statecontrolled religious system'. This concern is prominent throughout the … 2009 UPR … However, the Working Group did not address China's religious repression in any of its forty-two recommendations. 107 The discourse under the '(freedom of) movement/assembly' frame details CSOs' views on the Chinese state's use of restrictions on believers' mobility as a form of control and religious oppression. The diverse accounts illustrate how this applies across religious groups. For example, 'prominent church leaders of a main house church in Gansu Province remain detained after Chinese security forces raided a worship service … apparently on charges of "gathering in an illegal assembly under the guise of religion"'. 108 In the case of followers of Islam, 'the Chinese government has instituted controls over … what version of the Koran and other religious texts may be used, where religious gatherings may be held, and what may be said on religious occasions'. 
109 A final strand of the discourse details CSO views on state surveillance of religious group members. The widespread nature of the practice is a core trope in the discourse. For example, one UPR submission refers to how 'FLG practitioners throughout China continue to be subjected to systematic surveillance of their movements, arbitrary searches of their homes, and monitoring of private communications'. 110 Accounts also detail how associates of individual believers are targeted by the authorities. For example, one account recalls the treatment of a human rights lawyer, stating that since 'he sent out three open letters to President Hu Jintao and Premier Wen Jiabao in 2005 demanding the government stop oppressing liberal religious believers … His family has been under severe surveillance'. 111 The discourse also details how state surveillance extends to new media. For example, one CSO recounted how 'several internet platforms were set up … but were closed one by one … online links are being kept under surveillance: the informations [sic] filtered, messages deleted, online chat and blogs blocked'. 112 Discussion The foregoing makes an original contribution in two respects: (1) by showing how indigenous Chinese CSOs' input to the UPR is limited. Inter alia, their voice is muted and some merely mirror the rhetoric of the ruling CCP. (2) By revealing that, in contrast, international CSOs are highly critical of what they see as state failure to uphold religious freedom. The lead frames in their UPR discourse include: denial of rights, imprisonment, legal failings, (re-)education, torture, and persecution. Regime theory suggests that, as with other international agreements, human rights treaties are signed with the bona fide intention that a country will implement their provisions to benefit its citizens. 113 The present findings, however, suggest this is a view that must be treated with caution. It downplays the reality that ruling elites may sign without full intent to comply. Rather they do so in order 'to appease a domestic or international constituency'. 114 In addition, two further, (non-discrete) factors are pivotal: the strength of civil society and international enforcement of human rights. In the latter regard, UN treaties have notoriously weak policy levers, and sanctions for non-compliance are limited. 115 In the former case, as liberal internationalist theory underlines, 'improvement in human rights is typically more likely the more democratic the country … [In short,] ratification [of human rights treaties] is more beneficial the stronger a country's civil society is'. 116 The troubling upshot is that in 'autocratic regimes with weak civil society, [human rights treaty] ratification can be expected to have no effect and is even possibly associated with more rights violations'. 117 The result is that shaming is often the strongest mechanism of human rights enforcement, but international shaming is viewed by the PRC as 'interference in internal affairs'. Moreover, the malaise is compounded by the fact that, as noted, civil society in the PRC is weak and strongly contained. That said, it could be argued that the constitution of the PRC defends 'freedom of belief' (xinyang) and, in common with practice in other jurisdictions, religious activity requires regulation by the state. However, in the Chinese case the basic division is between inner belief and external activity (the latter defined as anything involving more than one person).
One could conceivably construe the ICCPR in these terms. Yet, even this narrow interpretation is at odds with the UN rights framework, for, as the Office of the High Commissioner for Human Rights' General Comment 22 (the right to freedom of thought, conscience and religion (Art. 18): 30/07/93, CCPR/C/21/Rev.1/Add.4) makes clear, religion involves buildings, ceremonies, holidays, food and clothing. 118 Therefore, it cannot be purely private. Thus the problem lies in the way that the PRC chooses to interpret its obligations. 'Normal' religious activity is precisely an 'activity within a norm, defined by the law'. Here the Chinese attitude is partly immemorial, reflecting earlier imperial practice, and partly an over-borrowing of the notion of state sovereignty, yet crucially without its other face, that is, human rights. Given that Chinese law has in general been criminal law, it is hard for China to understand the sphere of civil law. In traditional China, family clans enjoyed a certain autonomy within the state but were regulated by rites and tradition. In the PRC today, this sphere no longer exists in its traditional form. When law steps in it does so as criminal law and as state law (interpreting civic behaviour in terms of loyalty and patriotism). Human rights law conflicts with this because it depends on a degree of civic vitality, and this may even clash with the law, certainly with civil law. In turn, this raises a further challenge: not just whether the PRC is democratic or autocratic, but whether it supports a vibrant civic society that is both law-governed (by civil law) and able to give scope to a certain degree of autonomy. Harold Laski refers to this as the 'federalism' of a state. 119 In other words, bodies within the state have their own specific autonomy. Until the CCP realises that this is not a threat to its political dominance, we will continue to see the clash between an international human rights perspective and a state discourse that only pays lip service to human rights. Accordingly, this study underlines that, in the absence of rights enforcement mechanisms, and in light of the disjuncture in CCP and international civil society organisations' UPR discourse, performativity and legitimation are a feature of contemporary rights practice in the PRC. In social theory terms, 'performativity' here is the 'reiteration of a norm or set of norms, and to the extent that it acquires an act-like status in the present, it conceals or dissimulates the conventions of which it is a repetition'. 120 In other words, through submissions to the UPR the government of the PRC appears to embrace civil society engagement and the promotion of religious freedoms in a way that advances political legitimacy, or the 'public basis of justification and appeals to free public reason, and hence to all citizens viewed as reasonable and rational'. 121 In contrast, the present critical analysis of international CSO data shows that 'legitimation' applies. This refers to 'communicative actions aimed at managing the public's perception that government actions are effective in promoting their desired ends, whether that is in fact true'. 122 Furthermore, analysis of the state discourse reveals instrumentalism and institutional decoupling to characterise the CCP's UPR submissions. In short, the ruling elite espouses the upholding of religious freedoms but acts to the contrary. In turn, all of this presents key challenges to CSOs as well as the wider international human rights community.
It affirms the conclusion of leading analysis that effective international regimes are likely to emerge only where they have deep roots in the functional demands of groups in domestic and transnational society, as represented by the domestic political institutions [such as civil society] that mediate between society and the state. 123 This study shows how, in the case of the PRC, practice presently falls short of this. The functional demands of domestic groups are largely absent, and whilst the demands of international CSOs are clear and vociferous, they remain unaddressed. Instead, the ruling CCP elite continues to suppress discontent whilst at the same time administratively fulfilling its UPR obligations. At the outset of the twenty-first century, this combination of factors allows it to give primacy to retaining power and furthering what it dubs 'a framework of socialism with Chinese characteristics' at the expense of contemporary religious freedoms.
Application of Amorphous Calcium Phosphate Agents in the Prevention and Treatment of Enamel Demineralization Enamel demineralization is a frequently occurring dental problem that affects both the health and aesthetics of patients and is a concern for dental professionals and patients alike. Hydroxyapatite, the main mineral component of enamel, dissolves readily under acid attack, resulting in enamel demineralization. Among agents for the prevention or treatment of enamel demineralization, amorphous calcium phosphate (ACP) has gradually become a focus of research. Based on the nonclassical crystallization theory, ACP can induce the formation of enamel-like hydroxyapatite and thereby achieve enamel remineralization. However, ACP has poor stability and tends to transform into hydroxyapatite in aqueous solution, resulting in the loss of its remineralization ability. Therefore, ACP needs to be stabilized in an amorphous state before application. Herein, ACP stabilizers, including amelogenin and its analogs, casein phosphopeptides, and polymers such as chitosan derivatives, carboxymethylated PAMAM and polyelectrolytes, together with their mechanisms for stabilizing ACP, are briefly reviewed. Scientific evidence supporting the remineralization ability of these ACP agents is presented. Limitations of existing research and further prospects of ACP agents for clinical translation are also discussed. INTRODUCTION Enamel demineralization is one of the most common dental problems; it can appear as white spot lesions (WSLs) in the early stage and even progress into cavities if effective interventions are not taken in time (Julien et al., 2013). In the normal oral environment, hydroxyapatite on the enamel surface is in contact with saliva and maintains a balance between dissolution and redeposition (Sollböhmer et al., 1995; Featherstone, 2004): hydroxyapatite can dissolve into calcium and phosphorus ions, while calcium and phosphorus ions in saliva can crystallize in a directional and orderly manner, forming an enamel-like hydroxyapatite structure on the enamel surface (Dorozhkin, 1997). When oral hygiene is poor, plaque biofilms form and adhere to the enamel surface, decomposing sugars, producing organic acids, and creating an acidic environment around the enamel. Under these circumstances, the dissolution-redeposition balance of hydroxyapatite is broken: hydroxyapatite dissolves faster than calcium and phosphorus ions are deposited, which eventually leads to enamel demineralization. In addition to the above strategies, the enamel biomimetic remineralization strategy, which is based on the natural enamel crystallization process (Cölfen and Mann, 2003), is being studied extensively due to its biomimetic mineralization capability (Chen et al., 2015; Wang et al., 2017). According to the nonclassical crystallization theory, the crystallization process of natural enamel can be interpreted as the following steps: 1) calcium and phosphorus ions aggregate to form amorphous calcium phosphate (ACP); 2) amelogenin stabilizes ACP into clusters; 3) ACP then arranges directionally to form bundles of hydroxyapatite, which gradually grow into enamel crystals and finally into enamel prisms (Beniash et al., 2009; Yang et al., 2010; Kwak et al., 2016).
To mimic the crystallization process of natural enamel and to achieve remineralization of demineralized enamel, ACP needs to be stabilized and then made to crystallize in a directional and orderly manner to form enamel-like hydroxyapatite. In this review, we mainly focus on how different agents stabilize ACP and on their remineralization effects on demineralized enamel. AMELOGENIN AND ITS ANALOGS Amelogenin (Amel) plays an important role in the formation of natural enamel (Wright et al., 2011; Moradian-Oldak, 2012; Ruan and Moradian-Oldak, 2015). Amelogenin can interact with calcium and phosphorus ions through the tyrosine-rich segment at its N-terminus and stabilize them in an amorphous state (Figure 1A). The C-terminus of Amel can guide ACP to crystallize directionally into hydroxyapatite (Tsiourvas et al., 2015). Some studies have used chitosan to load amelogenin, forming a chitosan-amelogenin gel (CS-Amel gel), and applied this gel system to the reconstruction of demineralized enamel. The CS-Amel gel can stabilize calcium and phosphorus ions into ACP and guide ACP to form enamel-like crystals that bind closely to natural enamel crystals (Ruan et al., 2013; Ruan et al., 2014). In addition to the direct application of amelogenin, some studies have focused on the remineralization effect of amelogenin analogs. Zhong et al. (2021) self-assembled the N-terminal tyrosine segment of amelogenin to form the leucine-rich amelogenin peptide (LRAP) and evaluated the ability of LRAP to stabilize calcium and phosphorus ions and guide their directional growth in mineralizing solutions. LRAP could effectively stabilize calcium and phosphorus ions into ACP and guide ACP to grow along its c-axis into bundles of hydroxyapatite crystals. Wang et al. combined a phase conversion lyase (PTL), which mimics the function of the N-terminus of amelogenin, with a synthetic peptide chain that has the function of the C-terminus of amelogenin to form an amyloid amelogenin analog (PTL/C-AMG). The PTL/C-AMG could combine calcium and phosphorus ions to form hydroxyapatite and promote the extension growth of hydroxyapatite crystals on the surface of natural enamel, eventually forming a highly ordered hydroxyapatite structure with mechanical properties similar to those of natural enamel. Lv and colleagues synthesized a short-chain polypeptide (QP5) based on the amino acid sequence of amelogenin and demonstrated the ability of QP5 to stabilize calcium and phosphorus ions. They verified the remineralization ability of QP5 against initial enamel demineralization in an in vitro enamel demineralization model and further confirmed its remineralization ability and potential for clinical translation in a rat caries model (Lv et al., 2015; Han et al., 2017). CASEIN PHOSPHOPEPTIDES Casein phosphopeptides (CPP) are casein extracts from milk that can markedly increase the apparent solubility of calcium and phosphate ions by forming ACP (Reeves and Latour, 1958). Researchers found that the main active sequence of CPP, the phosphoserine-glutamate cluster (-Ser(P)-Ser(P)-Ser(P)-Glu-Glu-), can stabilize calcium and phosphate ions and form the CPP-stabilized ACP complex (CPP-ACP) (Adamson and Reynolds, 1996), preventing the spontaneous crystallization, phase conversion and precipitation of calcium and phosphorus ions (Shen et al., 2001) (Figure 1B). Reynolds soaked artificially demineralized enamel in CPP-ACP solution and found that CPP-ACP could effectively remineralize subsurface enamel lesions.
The mechanism may be that CPP can maintain a high concentration of calcium and phosphorus ions in the solution, which infiltrate into the subsurface lesion area to achieve efficient enamel remineralization (Reynolds, 1997). The team further validated the preventive effect of CPP-ACP on enamel demineralization in a rat caries model (Reynolds et al., 1995). With the U.S. Food and Drug Administration and other regulatory agencies confirming the biosafety of CPP-ACP (Cochrane et al., 2010), CPP-ACP has been added to oral health care products such as Tooth Mousse (GC, Tokyo, Japan) (Rees et al., 2007) and Tooth Mousse Plus (CPP-ACPF, GC, Tokyo, Japan) (Hamba et al., 2011; Bataineh et al., 2017; Olgen et al., 2021). These agents have been gradually adopted in clinical practice and have been studied in a number of clinical trials (Sitthisettapong et al., 2015; Güçlü et al., 2016; Munjal et al., 2016; Thierens et al., 2019). However, the remineralization ability of CPP-ACP and CPP-ACPF for WSLs remains uncertain. Researchers have suggested that CPP-ACP and CPP-ACPF may have the ability to prevent and treat WSLs, but their effects are not significantly greater than those of a fluoride agent alone (Pithon et al., 2019; Wang D et al., 2020). In addition, casein-related allergy in certain populations also limits the clinical use of CPP-ACP and CPP-ACPF. POLYMERS In addition to the aforementioned amelogenin and its analogs and CPP, some kinds of polymers can also stabilize calcium and phosphate ions, including chitosan derivatives, polyamidoamine and polyelectrolytes. Chitosan Derivatives Chitosan derivatives, such as carboxymethyl chitosan (CMC) and phosphorylated chitosan (Pchi), can bind calcium ions through chelation between carboxyl groups and calcium ions and then bind phosphate ions to form ACP (Figure 1C). The remineralization of demineralized enamel is realized by the ordered crystallization of ACP into enamel-like hydroxyapatite crystals (Zhang et al., 2014; Zhang et al., 2018). Zhu combined carboxymethyl chitosan (CMC) and lysozyme (LYZ) to stabilize ACP and formed the CMC/LYZ-ACP nano-gel, which can regenerate a prism-like remineralized enamel layer on the surface of eroded enamel (Zhu et al., 2021). Song successively added CaCl2 and K2HPO4 into a Pchi solution to construct the Pchi-ACP nano-complex. X-ray diffraction and selected-area electron diffraction results confirmed the amorphous state of the nano-complex, and scanning electron microscopy and micro-CT results showed that the Pchi-ACP nano-complex could remineralize demineralized enamel (Song et al., 2021). Poly-Amidoamine Poly-amidoamine (PAMAM) was first synthesized by Tomalia in the 1980s (Tomalia et al., 1985). PAMAM contains a large number of amide groups with a function similar to that of peptide bonds, so PAMAM can mimic the functions of a variety of proteins and peptides (Svenson and Tomalia, 2012). PAMAM can acquire mineralization capability through carboxyl modification. Carboxyl-modified PAMAM (PAMAM-COOH) can bind calcium ions through its carboxyl groups and further attract phosphate ions, stabilizing calcium and phosphate ions into ACP (Khopade et al., 2002; Zhou et al., 2007; Zhou et al., 2013) (Figure 1D).
Under the crystallization guidance of PAMAM-COOH, ACP can form enamel-like hydroxyapatite in an orderly manner on the surface of demineralized enamel (Chen et al., 2013). Another study found that PAMAM-COOH can induce calcium and phosphorus ions to grow and crystallize along the z-axis on the surface of demineralized enamel, and the microhardness of the remineralized enamel is comparable to that of natural enamel (Chen M et al., 2014). Polyelectrolytes Polyelectrolytes are a class of polymers with ionizable units, which ionize in aqueous solution into charged polymers and counter-ions of opposite charge (Koetz and Kosmella, 2007); examples include polyacrylic acid (PAA), polyallylamine (PAH) and polyaspartic acid (PASP). PAA is rich in carboxyl groups that combine with calcium ions to form -COO-/Ca2+ structures (Huang et al., 2008), so PAA can stabilize ACP (Gower, 2008; Dey et al., 2010) (Figure 1E). Qi added calcium and phosphorus ions into a PAA solution to construct the PAA-ACP complex and verified the stability of the PAA-ACP complex by solution turbidity analysis and dynamic light scattering. Scanning electron microscopy, transmission electron microscopy, infrared spectroscopy and X-ray diffraction analyses demonstrated the remineralization ability of PAA-ACP (Qi et al., 2018). Our group used PAA to stabilize amorphous calcium phosphate and then loaded PAA-ACP onto aminated mesoporous silica nanoparticles (aMSN) to form the PAA-ACP@aMSN delivery system. PAA-ACP@aMSN was shown to promote enamel remineralization, and surface microhardness analysis and X-ray diffraction analysis showed that the remineralized layer induced by PAA-ACP@aMSN had mechanical properties and crystal texture comparable to those of natural enamel. PAA-ACP can also act as a dental adhesive filler to endow adhesives with enamel remineralization ability (Wang et al., 2018). Other polyelectrolytes, such as polyallylamine (Yang et al., 2017), polyaspartic acid (Zhou et al., 2021) and polyglutamic acid (Sikirić et al., 2009; Terauchi et al., 2019), can also stabilize calcium and phosphorus ions, but the preventive or therapeutic effect of these polyelectrolyte-stabilized ACPs on enamel demineralization remains to be further investigated. ACP PARTICLES As an amorphous substance, ACP tends to spontaneously transform into apatite crystals in aqueous solution from a thermodynamic point of view (Eanes et al., 1965; Chow et al., 1998). Therefore, in addition to applying stabilizers to keep ACP in an amorphous state, another way to stabilize ACP is to store the prepared ACP in an anhydrous, dry granular state to form ACP particles. Since the 1990s, ACP particles have gradually been used as bioactive additives in studies of tooth remineralization (Skrtic and Eanes, 1996). ACP particles can act as bioactive fillers in dental filling resins, endowing the resin with the ability to continuously release calcium and phosphorus ions and promote the formation of hydroxyapatite (Skrtic et al., 2004). However, the uncontrollable agglomeration of ACP particles in the resin affects mechanical properties of the resin such as bonding strength and flexural strength, so ACP particles are only suitable for materials with low mechanical requirements, such as pit and fissure sealants (Skrtic et al., 2004; Dunn, 2007). In 2011, Xu first synthesized nano-ACP (NACP) by a spray-drying method and mixed it into dental resin as a filler.
The NACP-modified dental resin could release calcium and phosphorus ions in an acidic environment, and the mechanical properties of the resin were even better than those of commercial dental resin materials (Xu et al., 2011). Since then, a large number of studies have added NACP to dental materials such as orthodontic bonding resins, sealants, resin-modified glass ionomers and other materials, and have verified their calcium and phosphorus ion release ability and enamel remineralization ability (Chen C et al., 2014; Ma et al., 2017; Liu et al., 2018; Xie et al., 2019; Gao et al., 2020; Ibrahim et al., 2020). ACP AGENTS VERSUS OTHER REMINERALIZATION AGENTS In addition to ACP agents, there are many other enamel remineralization agents, such as fluoride-containing agents, hydroxyapatite preparations and tricalcium phosphate. In vitro and in vivo studies have been conducted to compare the remineralization performance of ACP agents and other agents (Table 1).
TABLE 1 | Studies comparing ACP agents with other remineralization agents.
Study | Design | Agents | Protocol | Evaluation | Main findings
— | In vitro | — | — | — | fTCP and CPP-ACP seem to be more effective in reducing WSLs than 1000 ppm F containing toothpastes.
Bhadoria et al. (2020) | In vitro | CPP-ACPF, fTCP | Twice a day, 2 min per application, for 10 days | Microhardness tester | fTCP showed a significantly higher increase in mean microhardness than CPP-ACPF and the control group; fTCP showed comparatively more remineralization potential than CPP-ACPF.
Brochner et al. (2011) | RCT | CPP-ACP and fluoride-containing toothpaste | Once a day for 4 weeks | QLF | A statistically significant regression of the WSLs was found in both study groups compared to baseline, but there was no difference between the groups; CPP-ACP could result in a reduced lesion area after 4 weeks, but the improvement was not superior to "natural" regression with daily use of fluoride toothpaste.
Huang et al. | RCT | CPP-ACPF and PreviDent fluoride varnish | CPP-ACPF group: twice a day for 8 weeks; varnish group: a single application at the start of the study | Visual assessment | The mean improvements assessed by the professional panel were 21%, 29%, and 27% in the CPP-ACPF, fluoride varnish, and control groups, respectively; CPP-ACPF and PreviDent fluoride varnish do not appear to be more effective than normal home care for improving the appearance of WSLs over an 8-week period.
Akin and Basciftci | Clinical controlled trial | 0.025% NaF rinse and CPP-ACP | Following the manufacturer's instructions after brushing with fluoride-containing toothpaste, for 6 months | Image processing with AutoCAD for quantitative analysis | The area of the WSLs decreased significantly in all groups; the success rate of CPP-ACP was significantly higher than that of NaF; CPP-ACP can be more beneficial than fluoride rinse for post-orthodontic remineralization.
Singh et al. | RCT | Fluoride toothpaste; fluoride varnish with fluoride toothpaste; CPP-ACP with fluoride toothpaste | Subjects brushed twice daily with fluoride toothpaste, assessed at 1, 3 and 6 months | DIAGNOdent, visual assessment | The mean visual and DIAGNOdent scores decreased more when fluoride varnish or CPP-ACP was used in addition to daily fluoride toothpaste, but the differences were not statistically significant; the use of fluoride varnish and CPP-ACP in addition to twice-daily fluoride toothpaste had no additional benefit for the remineralization of post-orthodontic WSLs.
However, the conclusions varied among these studies.
Some studies found that ACP agents have a better remineralization effect than other agents, while others suggested that the remineralization effect of ACP agents is similar to or no better than that of other agents. Whether ACP agents have better remineralization properties than other agents needs to be further investigated in future research. DISCUSSION ACP agents have outstanding preventive and therapeutic capacity against enamel demineralization due to their ability to form enamel-like hydroxyapatite on the surface of demineralized enamel (Kwak et al., 2016). However, since ACP agglomerates easily and is unstable in aqueous solution (Chow et al., 1998), the main challenge in applying ACP for enamel remineralization is its stabilization. Many different materials, including amelogenin and its analogs (Tsiourvas et al., 2015; Wang Y et al., 2020), casein phosphopeptides (Cross et al., 2005), and polymers such as chitosan derivatives (Zhu et al., 2021), carboxymethylated PAMAM (Chen et al., 2013) and polyelectrolytes, have been used to stabilize calcium and phosphorus ions into ACP. Another strategy is to store ACP in a water-free state, forming ACP particles and NACP particles (Betts et al., 1975; Xu et al., 2011). The remineralization abilities of these ACP agents have been confirmed in previous studies. However, except for CPP-ACP and CPP-ACPF, which have been commercialized (Reise et al., 2021), most of the other ACP agents are still at the in vitro experimental stage. It is still uncertain whether these ACP agents can achieve remineralization of demineralized enamel in vivo. In addition, most studies evaluated the remineralization ability of ACP agents by measuring the hardness recovery of demineralized enamel (Gokkaya et al., 2020), observing the mineral deposition on demineralized enamel, or measuring the lesion depth (Soares-Yoshikawa et al., 2021). None of the above-mentioned evaluation methods can directly confirm whether ACP agents form enamel-like hydroxyapatite. The biomimetic remineralization ability of ACP agents needs further investigation. To further promote the translation of ACP agents into clinical application, basic studies with adequate evaluation methods as well as relevant in vivo studies are still needed. In addition, whether ACP agents have better remineralization effects than other agents remains to be further explored. ACP complexes exist in an amorphous state in the liquid phase (Chen et al., 2013; Niu et al., 2017; Qi et al., 2018; Song et al., 2021), whereas ACP particles (Skrtic et al., 2004; Xu et al., 2011) are solid powders. Neither the liquid nor the solid form is convenient for storage and direct application in the oral environment. Studies have been conducted to address the storage and application challenges of ACP agents: 1) Mouthwash. Carriers such as chitosan (Ruan et al., 2013) and carboxymethyl chitosan (Zhu et al., 2021) have been used to load ACP agents, and these delivery systems can be applied in the oral environment in the form of a mouthwash. 2) Toothpaste and tooth desensitizer. Another application form of ACP agents is to make them into pastes. Our group used mesoporous silica nanoparticles to load ACP agents to achieve the enrichment and storage of ACP, and this delivery system can be applied as a toothpaste filler. CPP-ACP agents can be used as desensitizers in the form of pastes (Pei et al., 2013; Chandavarkar and Ram, 2015; Yang et al., 2018).
3) Resin product. Particulate forms of ACP have been incorporated into resin products, such as adhesives (Wang et al., 2018), pit and fissure sealants (Utneja et al., 2018) and varnishes (Schemehorn et al., 2011), to achieve convenient applications that do not depend on patient compliance. The mouthwash form of ACP agents is convenient to use, but the relatively low concentration of ACP and its inability to persist on the enamel surface for long periods limit the effectiveness of ACP agents for enamel remineralization. The paste-like application form can effectively increase the concentration of ACP and maintain a high concentration of ACP on the enamel surface during application, but, like mouthwash, it supports hydroxyapatite formation only for a short duration because of saliva flushing. Resin products modified with ACP agents can release ACP onto the enamel surface over a long period, thereby achieving long-term prevention or treatment of enamel demineralization. However, the effect of ACP agent incorporation on the performance of these products, such as mechanical performance and biocompatibility, needs further exploration, and the long-term stability of ACP release from these products should be considered in future studies. CONCLUSION Herein we summarize the strategies for stabilizing ACP. Calcium and phosphorus ions can be stabilized in the ACP state using a variety of methods, but the preventive and therapeutic effects of these ACP agents on enamel demineralization still await further investigation. There are three main forms of storage and application of ACP agents, namely mouthwash, toothpaste/tooth desensitizer and resin products. However, due to the shortcomings of the above-mentioned forms of ACP agents, more easy-to-use and long-lasting forms of ACP agents remain to be explored. AUTHOR CONTRIBUTIONS JY drafted the manuscript. HY and TL revised the manuscript. FH and HH designed the work and revised the manuscript. All authors approved the final version to be published.
Supply-Demand Matching in Non-Cooperative Social Networks The complex supply-demand matching problem is a kind of social service computing problem, which can be applied to the coordinated production of products or the supply of services. In this scenario, the demander needs a number of suppliers to provide services or products to complete a given task. The key to solving this problem is to build a supply network that covers the requester's requirements. Traditional collaboration research in social networks has mainly focused on the "team formation problem", that is, building a team that covers all the skills required for a task. However, due to the complex characteristics of supply-demand matching problems in social service applications, team formation methods are limited and inefficient, and there is no dedicated solution for the complex supply-demand matching problem in social networks. This paper proposes a general framework to solve the complex matching problem of supply and demand. Under non-cooperative constraints, social networks are used to build supply networks with low communication loss, and unnecessary cost is reduced through cooperation. I. INTRODUCTION This paper studies the complex matching problem of supply and demand in non-cooperative social networks (SNs). The demander wants to build a supply network that covers the task demand in both type and quantity. Unlike the previous ''team formation problem'' [1]-[11], we have to redesign the solution because of the following characteristics of the supply-demand matching problem: (1) the same service or product requirement in a task can be split among multiple providers who supply it collaboratively; (2) unlike contributed skills, suppliers of services or products may face capacity caps; (3) a single supplier can supply multiple products and services to multiple demanders simultaneously. Besides, due to the selfishness of members in practical applications [12], [13], the scheme must be feasible under non-cooperation constraints. At the same time, considering that individuals with closer social relations are more likely to have closer geographical and linguistic connections and a higher level of trust [14], [15], SNs are used to select suppliers with closer social relations to the employer, so as to reduce the cost of communication [16]-[20]. Although there are many ways to use SNs to form professional collaborative teams [4], [21], [22], they still have defects and practices that are not applicable to this supply-demand problem: (1) these methods ignore the differences in social relationship quality caused by differences in SN structure; for example, two SNs consisting of a complete graph and its minimum spanning tree cannot be treated as having the same quality; (2) privacy policies lead to an imbalance of network information, which hinders a better employment plan (one that reduces the total cost of each task) across the whole SN; (3) unlike a professional cooperative team, many participants in the same supply network have no cooperative relationship, and they supply products to the demander independently. Therefore, an effective supply-demand matching method should evaluate the social relationship between the supplier and the demander, expand the supply network by using some information about member neighbors, and pay attention to global optimization.
In this context, we design a complete supply-demand matching method, including (1) a distributed negotiation-based supply network formation algorithm, which allows the supplier and the demander to decide whether to cooperate and to agree on a quotation in line with the interests of both parties, so as to initially build a supply network covering the task demand; (2) a preference algorithm, which uses the relationships between nodes in the SN to describe the impact of trust and communication issues on task cost; and (3) a cooperate (coordination) algorithm, which is used to further reduce unnecessary cost loss and achieve global optimization after the initial construction of the supply network. The theoretical analysis shows that the proposed method can solve the complex supply-demand problem while respecting members' selfishness and accounting for communication cost, and can optimize individual and group costs at the same time. Finally, a series of experiments is set up to verify the improvement of this method over the traditional Contract Net (CN) method, and the parameters are adjusted to simulate different task scenarios in order to observe the impact of changing conditions on the performance of the algorithm. The experimental results show that (1) compared with the traditional CN method, the method proposed in this paper can effectively reduce the cost, and (2) the proposed method performs better in scenarios with low supply cost and small demand for a single product. II. RELATED WORK A. TEAM FORMATION PROBLEM As mentioned above, there are differences between the ''supply-demand matching problem'' and the ''team formation problem'', but current research on the latter can provide references for the former. The goal of the ''team formation problem'' is to form a team of experts covering all the skills required for the task. Considering the evolution, emergent behavior, operational independence, and management independence of the systems, Lim and Ncube [22] developed a method of team building based on the stakeholders' recommendation, and proposed using social networks and crowdsourcing to identify and prioritize the stakeholders of systems projects. Taking into account the impact of social relations on team cooperation, Wang et al. [21] used the social neighborhood information of members to expand the connectivity graph to build a team and designed a mechanism based on distributed negotiation to improve social welfare. Different from methods that search for experts in the whole SN, Sun et al. [8] proposed a team formation model that outsources tasks to social networks and selects a list of high-centrality experts as the seed, so as to reduce the communication cost of the team and narrow the search space. Instead of targeting individuals, Chamberlain [2] first proposed the concept of Groupsourcing. In Groupsourcing, tasks are assigned to a group of people with different expertise who are connected through social networks. Compared with other methods, Groupsourcing offers a high-precision, data-driven, and low-cost approach. In Groupsourcing, every time a complex task is published, a new team does not need to be formed from scratch to satisfy the skill requirements of the task. Based on Chamberlain's research, Jiang et al. [3] formally defined the context-aware task allocation problem in group-oriented crowdsourcing, proposed a heuristic context-aware task allocation approach, and proposed a modeling method for natural worker groups in crowdsourcing, including groups with and without leadership.
Besides, instead of solving the problem from the requestor's perspective, Lykourentzou et al. [6] explored a ''team dating'' strategy, which is a self-organized group team formation method. In this method, employees try out and evaluate different candidate partners. Rokicki et al. [7] explored a cohesive strategy that includes self-organization. In this strategy, workers (initially in the form of a one-man team) can decide which team they want to join or who could join their team. B. TEAM REVENUE OPTIMIZATION Common methods to improve team profitability include reducing personnel and communication costs, improving team members' quality, and optimizing the rationality of task allocation. Complex tasks can be decomposed into smaller subtasks, which can be executed either sequentially or in parallel by workers. In order to build a high-quality team by rationalizing task requirements, Jiang and Matsubara [23] demonstrated the superiority of vertical task decomposition over horizontal task decomposition in improving the quality of the task's solution, and clearly explained the optimal vertical task decomposition strategies under two revenue sharing schemes, which maximized the quality of task solutions. Tran-Thanh et al. [9] studied the issue of how to hire higher-quality experts on a limited budget; they redesigned the classic multi-armed bandit (MAB) model to solve this problem. An algorithm called bounded ε-first was proposed: it uses the first εB of the total budget B to derive estimates of the workers' quality characteristics (exploration), while the remaining (1 − ε)B is used to maximize the total utility based on those estimates (exploitation). Tran-Thanh also developed another algorithm, BudgetFix [24], which determines the number of interdependent micro-tasks and the price to pay for each task given budget constraints. Moreover, BudgetFix provides quality guarantees on the accuracy of the output of each phase of a given workflow. Wolf et al. [25] were the first to realize that SNs play an important role in teamwork and that social connections among individuals might represent collaboration relationships (e.g., previous collaboration on common tasks). The advantage of using these SNs is that individuals who have worked together previously are expected to work effectively as a team without much coordination overhead [16], [17]. Therefore, scholars have considered establishing cooperative teams in SNs [18]-[20], so that the members of the team form a connected graph and work together effectively. Considering the selfishness of the members of SNs [12], [13], Wang et al. [21] further explored a team-building approach adapted to non-cooperative constraints. They model each individual as a selfish entity, using a negotiating mechanism to cut costs and improve social welfare. III. PROBLEM DESCRIPTION First, a social network SN = <A, E> is an unweighted undirected graph, where A = {a_1, a_2, ..., a_m} is the set of all member nodes in the graph and (a_i, a_j) ∈ E indicates that there is a social relationship between nodes a_i and a_j. Each a_i ∈ A is defined by the 4-tuple <G(a_i), M(a_i), C(a_i), N(a_i)>, where G(a_i) = {g_1, g_2, ...} represents the types of product that a_i can supply; M(a_i) = {max(a_i, g_1), ..., max(a_i, g_|G(a_i)|)} indicates the maximum quantity of each g_j ∈ G(a_i) that a_i can supply; C(a_i) = {c(a_i, g_1), ..., c(a_i, g_|G(a_i)|)} gives a_i's unit supply cost c(a_i, g_j) for each product it can supply; and N(a_i) is the set of a_i's neighbor nodes in the SN. Task t is defined by the 4-tuple <I_t, G(t), R(t), E(t)>, where I_t is the requester of task t;
G(t) = {g_1, g_2, ...} indicates the types of product supply needed to complete task t; R(t) = {r(t, g_1), ..., r(t, g_|G(t)|)} indicates the demand of task t for each product; and E(t) = Σ_{j=1,...,|G(t)|} e(t, g_j) indicates the total value (not profit) that the product supplies in task t can bring to I_t. The supply network N_t for task t is defined by <t, W_t, O(t), Us(t)>, where W_t represents the set of employed members of N_t; O(t) = {(t, a_i, g_j, q(a_i, g_j, t), p(a_i, g_j, t), η(I_t, a_i)), ..., (t, a_p, g_q, q(a_p, g_q, t), p(a_p, g_q, t), η(I_t, a_p))} is the set of order details contained in N_t, where q(a_i, g_j, t) is the quantity of g_j supplied by a_i for task t in the order, p(a_i, g_j, t) = q(a_i, g_j, t) * c(a_i, g_j) is a_i's payment (salary) for supplying g_j for task t, and η(I_t, a_i) is the communication cost coefficient between I_t and a_i; Us(t) = {us(t, g_j, q_us(t, g_j)), ..., us(t, g_q, q_us(t, g_q))} is the unmet requirement of task t, where q_us(t, g_j) = r(t, g_j) − Σ_{(t, a_i, g_j, ·, ·, ·) ∈ O(t)} q(a_i, g_j, t) is the unmet requirement of g_j in t. In addition, several derived quantities are used many times in this paper. Although they can be expressed with the symbols already defined, their definitions are given here for convenience: µ(t, a_i, g_j) = q(a_i, g_j, t) / r(t, g_j), with µ(t, a_i, g_j) ∈ (0, 1], represents the ratio of the quantity of g_j supplied by a_i to r(t, g_j) in the contract of task t; λ(t, g_j) is the ratio of the currently unallocated demand of t's sub-demand g_j to r(t, g_j), that is, λ(t, g_j) = 1 − Σ_{(t, a_i, g_j, ·, ·, ·) ∈ O(t)} q(a_i, g_j, t) / r(t, g_j), with λ(t, g_j) ∈ [0, 1]; and th(t, g_j) = E(t) / r(t, g_j) is the threshold used to measure whether it is in I_t's interest to employ a node to supply g_j for task t. The problem is that, given task t and SN = <A, E>, the task initiator I_t wants to build N_t = <t, W_t, O(t), Ø> such that task t is completely covered by N_t in both the types and quantities of its requirements; N_t contains no redundant members, i.e., each member provides at least one product supply; and the cost is reduced as much as possible in order to increase the revenue Pro(t) = E(t) − Σ_{(t, a_i, g_j, ·, ·, ·) ∈ O(t)} q(a_i, g_j, t) * (1 + η(I_t, a_i)) * c(a_i, g_j). The above symbol definitions are summarized in Table 1.
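To make the notation above concrete, the following minimal Python sketch models the agent, task and order structures and the derived quantities th(t, g_j), λ(t, g_j) and Pro(t). All class and function names here are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A social-network member a_i: products it can supply, capacity limits,
    unit costs, and its neighbours in the SN (illustrative sketch)."""
    name: str
    capacity: dict      # g_j -> max(a_i, g_j)
    unit_cost: dict     # g_j -> c(a_i, g_j)
    neighbours: set = field(default_factory=set)

@dataclass
class Task:
    """A task t: requester I_t, demanded quantity r(t, g_j) per product,
    and total value E(t) the task brings to the requester."""
    requester: str
    demand: dict        # g_j -> r(t, g_j)
    value: float        # E(t)

@dataclass
class Order:
    """One order in O(t): supplier, product, quantity and communication coefficient."""
    supplier: Agent
    product: str
    quantity: float
    eta: float          # communication cost coefficient eta(I_t, a_i)

    @property
    def payment(self):  # p(a_i, g_j, t) = q(a_i, g_j, t) * c(a_i, g_j)
        return self.quantity * self.supplier.unit_cost[self.product]

def threshold(task: Task, product: str) -> float:
    """th(t, g_j) = E(t) / r(t, g_j): bound on acceptable effective unit cost."""
    return task.value / task.demand[product]

def unallocated_ratio(task: Task, orders: list, product: str) -> float:
    """lambda(t, g_j): share of r(t, g_j) not yet covered by existing orders."""
    allocated = sum(o.quantity for o in orders if o.product == product)
    return 1.0 - allocated / task.demand[product]

def revenue(task: Task, orders: list) -> float:
    """Pro(t) = E(t) - sum over orders of q * (1 + eta) * c."""
    spent = sum(o.quantity * (1 + o.eta) * o.supplier.unit_cost[o.product]
                for o in orders)
    return task.value - spent

if __name__ == "__main__":
    a1 = Agent("a1", capacity={"g1": 10}, unit_cost={"g1": 2.0})
    t = Task(requester="I_t", demand={"g1": 8}, value=40.0)
    orders = [Order(a1, "g1", 5, eta=0.1)]
    print(threshold(t, "g1"), unallocated_ratio(t, orders, "g1"), revenue(t, orders))
```

This sketch only mirrors the definitions; the algorithms of Section IV operate on top of such structures.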
IV. SUPPLY-DEMAND MATCHING IN NON-COOPERATIVE SOCIAL NETWORKS In this paper, a distributed negotiation-based mechanism is used so that the supply and demand sides, driven by their own interests, make decisions on an equal footing, and a supply network covering the task demand is constructed. In this process, the social relations exposed by the expansion of the supply network are quantified to evaluate the communication cost between SN nodes, which affects the employment results. Finally, after the initial construction of the supply network, the requester coordinates with other task requesters in the SN and exchanges some members of the supply network, on the premise of meeting the interests of both sides, to improve individual and group benefits. This process is carried out by three algorithms, whose relationship is shown in Figure 1. We describe these three algorithms in turn. A. SUPPLY NETWORK CONSTRUCTION ALGORITHM This algorithm takes the SN and I_t's demand as input and outputs a preliminary supply network in line with the interests of both the task supplier and the demander. Before describing the algorithm, several role concepts should first be defined. Definition 1 (Freelancer, Contractor, Supplier): For a product g_j, if a node a_i with g_j supply capability has no g_j order at a given time, then a_i is called a freelancer; if a_i has g_j orders but N_t has not been completed and the g_j supply has not yet started, a_i is called a contractor; from the start of supply until the end of supply, a_i is called a supplier. 1) EXPANSION ALGORITHM This algorithm drives the expansion of the supply network.
Algorithm 1 (Expand the Supply Network)
1. Initialize W_t = {I_t}; ∀g_j ∈ G(t), q_us(t, g_j) = r(t, g_j)
...
4. For a_x ∈ N(a_i) ∪ {a_i}
5. update N_t and a_x
...
8. Terminate this Algorithm
...
10. End for
11. End for
In steps 1-2, the initialized N_t contains only I_t, without any order, and the remaining unsatisfied demand equals the complete initial demand of t. The algorithm then traverses each member already included in the supply network N_t together with their neighbor nodes (steps 3-4), executes the decide algorithm on each visited node (step 6), and updates the supply network N_t according to the information returned by the decide algorithm (step 7), until all the requirements of t are met (steps 8-9). The operation in step 5 is described later under the ''preference algorithm''. 2) DECIDE ALGORITHM This algorithm gives the details of step 6 in Algorithm 1 and completes the decision-making of both the supplier and the demander through a three-stage distributed negotiation. a: OFFER STAGE In this stage, I_t issues an order offer to the nodes that meet its requirements.
Algorithm 2 (Decide-Offer Algorithm)
/* Q_temp is the task amount of a_i in the negotiation; P_temp is a_i's order compensation in the negotiation */
1. initialize th(t, g_j) = E(t) / r(t, g_j)
2. for each g_j ∈ G(a_i)
3. if c(a_i, g_j) * (1 + η(I_t, a_i)) ≤ th(t, g_j)
4. if λ(t, g_j) == 1
...
5. P_temp = Q_temp * c(a_i, g_j)
...
8. send offer(t, a_i, g_j, Q_temp, P_temp)
9. P_temp = Q_temp * c(a_i, g_j)
...
13. send offer(t, a_i, g_j, Q_temp, P_temp)
14. end for
Step 1 defines the threshold th used to measure whether an employment is in I_t's interest. The algorithm traverses each product supply capability g_j of a_i and checks whether its unit cost (unless otherwise stated, ''cost'' in this paper includes both product cost and communication cost) is below th (steps 2-3). If it is, I_t sends a g_j order offer to a_i in a ''saturated'' way. This ''saturation'' is reflected in two cases: (1) when the g_j demand of task t has not yet been allocated at all, the order quantity in the offer is the entire g_j demand (steps 4-8); (2) when part of the g_j demand has already been allocated, the offer aims to cover all the remaining unallocated g_j demand first and then to replace all allocated orders in the current N_t whose cost is higher than a_i's (steps 9-13).
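As a rough illustration of the offer stage just described, the following Python sketch computes the offers I_t would send to a candidate node: the threshold check, the whole-demand offer when nothing is allocated, and the "saturated" offer (unmet demand plus replaceable higher-cost orders) otherwise. The function signature and field names are assumptions for illustration, not the authors' implementation, and the actual replacement of orders is not performed here.

```python
def offer_stage(task_value, demand, orders, supplier_costs, eta):
    """Sketch of the Decide-Offer logic for one candidate node a_i.

    task_value    : E(t)
    demand        : {g_j: r(t, g_j)}
    orders        : list of dicts {"product", "quantity", "effective_unit_cost"}
                    already present in the supply network N_t
    supplier_costs: {g_j: c(a_i, g_j)} for the candidate node a_i
    eta           : communication cost coefficient eta(I_t, a_i)
    Returns the offers I_t would send, one per eligible product.
    """
    offers = []
    for g, unit_cost in supplier_costs.items():
        if g not in demand:
            continue
        threshold = task_value / demand[g]            # th(t, g_j)
        effective = unit_cost * (1 + eta)             # cost including communication
        if effective > threshold:                     # hiring would not pay off
            continue
        allocated = [o for o in orders if o["product"] == g]
        if not allocated:                             # lambda(t, g_j) == 1
            q_temp = demand[g]                        # offer the whole demand
        else:
            unmet = demand[g] - sum(o["quantity"] for o in allocated)
            # "saturated" offer: unmet demand plus every allocated order
            # that is more expensive than a_i and could be replaced
            replaceable = sum(o["quantity"] for o in allocated
                              if o["effective_unit_cost"] > effective)
            q_temp = unmet + replaceable
        if q_temp > 0:
            offers.append({"product": g,
                           "quantity": q_temp,
                           "payment": q_temp * unit_cost})  # P_temp = Q_temp * c
    return offers

# Example: one product, part of the demand already covered at a higher cost.
existing = [{"product": "g1", "quantity": 3, "effective_unit_cost": 4.0}]
print(offer_stage(task_value=40, demand={"g1": 8}, orders=existing,
                  supplier_costs={"g1": 2.0}, eta=0.1))
```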
b: RESPONSE STAGE In this stage, a_i responds to I_t's offer according to its own situation. Several concepts of capacity should first be clarified. Definition 2 (Free Capacity, Locked Capacity, Forbidden Capacity): For a product g_j, the surplus g_j capacity of a freelancer is called free capacity; the g_j capacity of a contractor is called locked capacity; and the g_j capacity of a supplier is called forbidden capacity. Because a supplier can only supply a particular product to one demander at a time, only free capacity can be offered freely to any demander. After receiving the offer, a_i responds according to the type of its g_j capacity: • Free capacity. After modifying the order quantity Q_temp to the smaller of its remaining g_j supply capacity max(a_i, g_j) and the order quantity Q_temp suggested by I_t in the offer, a_i makes a positive response, agreement(t, a_i, g_j, Q_temp, P_temp), to accept the offer; • Locked capacity. As an important basis for the follow-up ''coordination algorithm'', a_i responds to I_t with mark(t, a_i, g_j, Q_temp, P_temp, t_2, q(a_i, g_j, t_2)) to declare the quantity of I_t's order that a_i could have accepted had it not been employed by I_t2, that is, min[Q_temp, max(a_i, g_j) + q(a_i, g_j, t_2)]; • Forbidden capacity. a_i makes a negative response and refuses the offer. c: CONFIRM STAGE At this stage, I_t determines the result of this decision according to the content of a_i's response: • If the response is agreement(t, a_i, g_j, Q_temp, P_temp), a_i is first allocated to supply the unmet g_j demand λ(t, g_j) in N_t; if Q_temp covers λ(t, g_j) and there is still a surplus, I_t continues to replace (or partially replace) the high-cost g_j orders in the current N_t, proceeding from higher-cost orders to lower-cost ones; • If the response is mark(t, a_i, g_j, Q_temp, P_temp, t_2, q(a_i, g_j, t_2)), this mark is recorded for the subsequent ''coordination algorithm''; • If the response is a refusal, the employment of a_i to supply g_j is abandoned. B. PREFERENCE ALGORITHM The preference algorithm describes the impact of trust and communication problems on task cost according to the relationships between nodes in the SN. Its key elements are: • determining the optimal precursor; • calculating the shortest distance; • designing an appropriate preference function to evaluate the communication loss according to the shortest distance. Definition 3 (Previous Supply Network, Precursor, Optimal Precursor, Shortest Distance): The previous supply network pre_N_t of task t is a network that includes all the nodes employed during the construction of N_t together with their precursors. In pre_N_t, if the demander I_t can reach a_i's direct neighbor a_x through a_i, then a_i is called a ''precursor'' of a_x, recorded as a_i ∈ pre(t, a_x). Among the precursors of a_x, the one with the shortest distance to I_t is called the ''optimal precursor'', opre(t, a_x). In pre_N_t, the shortest distance Dist(t, a_x) between I_t and a_x is n if a_x can reach I_t by visiting optimal precursors iteratively n times. 1) DETERMINE THE OPTIMAL PRECURSOR Step 5 of Algorithm 1 gives the moment at which the optimal precursor of a node is determined, using the method setOpre(a_i, a_x). First, the method checks whether the node visited in step 5 of Algorithm 1 is its own precursor; if it is, nothing is done (steps 1-2). If not, a_i is recorded as a precursor of a_x, and the precursor set of a_x is traversed to update a_x's optimal precursor in task t (steps 3-11). In this process, if the optimal precursor of a_x changes to a_i, the shortest distance between a_x and I_t is recalculated by calDist(t, a_x) (step 8).
Algorithm 3 (Determine the Optimal Precursor, setOpre(a_i, a_x))
1. if a_i == a_x
2. do nothing
3. else
4. pre(t, a_x) = pre(t, a_x) ∪ {a_i}
5. for each a_y in pre(t, a_x)
6. do nothing
...
11. end for
2) CALCULATE THE SHORTEST DISTANCE Step 8 of Algorithm 3 shows when the shortest distance between a node and I_t is calculated. The calDist(t, a_x) method works as follows: a pointer is placed at a_x and repeatedly moved in the direction of the optimal precursor of the current node until it reaches I_t; the number of pointer moves is then taken as the shortest distance between a_x and I_t.
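The precursor bookkeeping just described can be illustrated with a small Python sketch; the class name and data layout are assumptions made for illustration, and cycles in the precursor structure are not handled, since the goal is only to mirror the setOpre/calDist logic outlined above.

```python
class PrecursorTracker:
    """Rough sketch of setOpre / calDist for one task (illustrative, not the authors' code)."""

    def __init__(self, requester):
        self.requester = requester
        self.pre = {}                  # a_x -> set of precursors of a_x in pre_N_t
        self.opre = {}                 # a_x -> optimal precursor (closest to I_t)
        self.dist = {requester: 0}     # shortest distance to I_t

    def set_opre(self, a_i, a_x):
        """Record a_i as a precursor of a_x and update a_x's optimal precursor."""
        if a_i == a_x:
            return
        self.pre.setdefault(a_x, set()).add(a_i)
        best = min(self.pre[a_x], key=lambda a: self.dist.get(a, float("inf")))
        if self.opre.get(a_x) != best:
            self.opre[a_x] = best
            self.dist[a_x] = self.cal_dist(a_x)

    def cal_dist(self, a_x):
        """Follow optimal precursors from a_x back to I_t and count the hops."""
        hops, node = 0, a_x
        while node != self.requester:
            node = self.opre[node]
            hops += 1
        return hops

# Example: I_t reaches a1 directly, and a2 is first reached through a1.
tracker = PrecursorTracker("I_t")
tracker.set_opre("I_t", "a1")
tracker.set_opre("a1", "a2")
print(tracker.dist["a1"], tracker.dist["a2"])   # 1 2
```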
Definition 4 (Communication Cost Coefficient, Upper Limit of the Communication Cost Coefficient): The communication cost coefficient η(I_t, a_i) is calculated by the preference function and describes the additional cost loss incurred when network members cooperate. The upper limit η_max of the communication cost coefficient is introduced because communication loss does not increase endlessly as the social relationship distance increases; it must converge to some upper limit, which needs to be determined according to the actual application. For example, the supply of precision instrument parts, such as aircraft instruments, requires more communication to ensure compliance with specifications and quality requirements, so the communication cost and η_max are both higher; for orders such as food packaging bags, η_max is lower because little additional communication is needed. The preference function determines the communication cost coefficient between nodes according to their social relationship and should satisfy the following constraints: (1) the function should be positively related to the shortest distance between nodes; (2) for nodes with a long social distance, the change in communication cost caused by a further increase in distance is no longer obvious, so the growth rate of the function should decrease as the distance increases, that is, the derivative of the function should decrease monotonically; (3) according to the theory of six degrees of separation, there are no more than six intermediate nodes between two points in a social network, so the communication cost coefficient should be close to its upper limit when the distance is 6. Under these constraints, the function −e^(−x) + 1 meets the requirements well: it increases monotonically on [0, +∞), its derivative gradually decreases to 0, the function converges to 1, and it approaches that upper limit when x = 6. The shape of the function is shown in Figure 2. In addition, to respect the upper limit η_max, the preference function is defined as η(I_t, a_i) = η_max * (1 − e^(−x)), where x is the shortest distance Dist(t, a_i) between a_i and I_t.
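The preference function itself is a one-line formula; a minimal Python version, assuming only η_max and the shortest distance as inputs, is:

```python
import math

def preference_coefficient(distance, eta_max):
    """Communication cost coefficient eta(I_t, a_i) = eta_max * (1 - e^(-x)),
    where x is the shortest social distance Dist(t, a_i)."""
    return eta_max * (1 - math.exp(-distance))

# The coefficient grows with distance and is already close to eta_max at x = 6.
for x in (1, 2, 6):
    print(x, round(preference_coefficient(x, eta_max=0.5), 3))
```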
C. COORDINATION ALGORITHM In order to optimize high-cost supply networks caused by SN privacy, the coordination algorithm coordinates requesters with unbalanced social resources and lets them exchange their rights to employ contractors, so as to improve individual and group benefits at the same time. The coordination algorithm is motivated by the following situation: I_t receives a_i's response mark(t, a_i, g_j, Q_temp, P_temp, t_2) during the construction of N_t and has to give up a_i temporarily. After I_t builds N_t, it finds that if a_i were employed to supply g_j, a cost C_1 could be saved compared with the current N_t; and for contractor a_i's employer I_t2, if there are other alternative suppliers besides a_i that can complete this part of the g_j supply, and the increased cost after substitution, C_2, is smaller than C_1, then there is room for coordination between I_t and I_t2. The coordination algorithm is executed after I_t completes the ''supply network construction algorithm''. I_t traverses every mark response it received in the process of constructing N_t, executes the coordination algorithm, and completes the coordination between the two requesters through a three-stage distributed negotiation. 1) REQUEST STAGE In this stage, I_t decides whether and how to issue a coordination request to I_t2 based on its own situation and the content of the mark. Based on the order quantity that a_i declared in the mark, I_t updates N_t by simulation, replacing the contractors whose unit supply cost of g_j is higher than that of a_i, in order of cost from high to low. The replaced g_j order quantity Q_temp is then compared with the order quantity q(a_i, g_j, t_2) that a_i holds at I_t2. If Q_temp >= q(a_i, g_j, t_2), I_t calculates the cost C_1 that the simulated new N_t saves compared with the original N_t and issues a coordination request (steps 1-22); otherwise, it gives up the coordination (steps 23-24), because coordination would reduce a_i's order quantity, which is not in line with a_i's interests and violates the non-cooperation principle.
Algorithm 4 (Cooperate-Request Algorithm)
/* Q_temp is the task amount of a_i in the negotiation; P_temp is a_i's order compensation in the negotiation */
1. The nodes a_exp in {a_exp | c(a_exp, g_j) * (1 + η(I_t, a_exp)) > c(a_i, g_j) * (1 + η(I_t, a_i)) && a_exp ∈ W_t} are arranged in descending order of c(a_exp, g_j) * (1 + η(I_t, a_exp)), and the µ(t, a_exp, g_j) of the arranged a_exp are filled into the list in turn.
2. µ'(t, a_i, g_j) = Q_temp / r(t, g_j)
3. x = 0
4. C_pre = 0
5. if list is empty
6. do nothing
7. else if µ'(t, a_i, g_j) <= list(x)
8. C_1 = Q_temp * (c(a_exp, g_j) * (1 + η(I_t, a_exp)) − c(a_i, g_j) * (1 + η(I_t, a_i))) /* a_exp is the node with µ(t, a_exp, g_j) == list(x) */
9. do nothing
...
2) JUDGE STAGE When I_t2 receives the request (I_t, I_t2, a_i, g_j, Q_temp, C_1), it judges whether and how to accept the coordination according to its own situation. The judgment is based on: • whether N_t2 has started to supply; • whether I_t2 can find other nodes with g_j supply capability to cover the order quantity q(a_i, g_j, t_2) as a replacement; • whether the cost loss C_2 incurred by I_t2 when replacing a_i is less than the cost C_1 saved by I_t.
... Terminate this Algorithm 14. end for 15. agree(I_t, I_t2, g_j, a_i, Contribution) ...
If N_t2 has started to supply, I_t2 directly rejects I_t's coordination request (steps 3-4); otherwise, it tries to find nodes that can replace a_i in supplying g_j (steps 5-17). The PreDecide method is used to cover the g_j requirement of q(a_i, g_j, t_2) (step 10) (the method is detailed below) until all of the requirement is met (steps 15-16). If λ_temp(t, g_j) > 0, the a_i order cannot be replaced successfully and the coordination request is rejected (steps 18-19). Otherwise, the cost C_2 lost when a_i is replaced is calculated (step 21). When C_2 >= C_1, the coordination request is rejected; when C_2 < C_1, the coordination request is accepted and a contribution in the interval [C_2, C_1) is requested (steps 22-25). The PreDecide method in step 10 is similar to the decide algorithm in the ''supply network construction algorithm'': the demander sends out an offer and the node that receives the offer responds. However, it differs in the following ways: • It is called ''Pre-Decide'' because it only seeks alternative nodes and does not include the actual hiring step known as the confirm stage. • Since a contribution can be requested from I_t to cover losses, PreDecide is carried out regardless of cost and does not require the threshold th. • Since ''locked capacity'' and ''forbidden capacity'' cannot meet the coordination needs immediately, there is no need to distinguish between these two types of capacity in the response stage of PreDecide; only ''free capacity'' can be used for replacement.
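The core economic test of the coordination algorithm, comparing I_t's saving C_1 with I_t2's replacement loss C_2 and agreeing only when C_2 < C_1 with a contribution in [C_2, C_1), can be sketched in Python as follows. The helper functions and the particular contribution choice are illustrative assumptions rather than the paper's exact procedure.

```python
def requester_saving(marked_cost, replaced_orders):
    """C_1: cost I_t would save by replacing its more expensive g_j orders
    with the marked contractor a_i (unit costs include communication)."""
    return sum(q * (cost - marked_cost) for q, cost in replaced_orders
               if cost > marked_cost)

def employer_loss(original_cost, substitute_costs, quantity):
    """C_2: extra cost I_t2 incurs when a_i's order of `quantity` units is
    re-assigned to substitute suppliers found by PreDecide (cheapest first).
    Returns None if the substitutes cannot cover the quantity."""
    extra, remaining = 0.0, quantity
    for cap, cost in sorted(substitute_costs, key=lambda s: s[1]):
        take = min(cap, remaining)
        extra += take * (cost - original_cost)
        remaining -= take
        if remaining <= 0:
            return extra
    return None        # replacement impossible -> reject the request

def coordinate(c1, c2):
    """Coordination succeeds only if I_t2's loss is smaller than I_t's saving;
    the contribution paid by I_t is then chosen in [C_2, C_1)."""
    if c2 is None or c2 >= c1:
        return None
    return (c2 + c1) / 2.0     # one admissible contribution (illustrative choice)

# Example: I_t could save C_1 = 6 by hiring a_i; I_t2 can re-cover a_i's 4 units
# from substitutes at a small extra cost C_2, so coordination is worthwhile.
c1 = requester_saving(marked_cost=2.0, replaced_orders=[(4, 3.5), (2, 1.8)])
c2 = employer_loss(original_cost=2.0, substitute_costs=[(3, 2.3), (5, 2.6)], quantity=4)
print(c1, c2, coordinate(c1, c2))
```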
• Since contributions can be requested to I t to cover losses, PreDecide are made regardless of cost and do not require threshold th limits. • Since ''locked capacity'' and ''forbidden capacity'' cannot meet the coordination needs immediately, it is not necessary to distinguish between the two types of capacity in the response stage of PreDecide, only ''free capacity'' can be used for replacement. 3) CONFIRM STAGE At this stage, I t receives the judge result from I t2 for its coordination request, and makes different actions according to different results. • If the result is agree(I t , I t2 , g j , a i , Contribution): I t2 cancels all g j orders of a i , allocates the canceled orders to the nodes that make positive response in I t2 's PreDecide algorithm, completes the change of supply network N t2 . I t gets the right to employ a i , replaces the high-cost g j orders of N t in algorithm 8, and hires a i to provide g j (the quantity of the supply is Q temp ), and pays I t2 contribution, ConMoney, as the compensation for coordination. • If I t receives I t2 's response as refuse: Abandon this coordination. V. VERIFICATION AND CONCLUSION The performance difference between the proposed algorithm and the traditional CN model is verified by a series of experiments with task cost as a measurement index. A. DATA SET In order to observe the performance of the algorithm under different conditions, specific parameter combination is used instead of the actual data set. Three groups of experiments were set up, and each group was executed 100 times. The parameter distribution is shown in Table 2. Value1∼3 simulated the normal scenario, high-threshold scenario and high-demand scenario respectively. The parameters of each independent experiment were obtained from the normal distribution. The values in table are the expectations and the standard deviation is 1. B. COMPARISON MODEL AND EVALUATION INDEX • Algorithm in this paper • Traditional CN method: in the process of supply network expansion, I t only employs a i to supply g j under the condition that the g j demand of N t has not been met and the g j cost of a i is lower than the threshold th. Compared to this method, our model's advantage is the ability to continually update the supply network to select the lowest cost suppliers and the opportunity to coordinate requesters to optimize the supply network. C. EVALUATION CRITERIA The cost of task t is taken as the measurement index of the algorithm. D. EXPERIMENTAL RESULT The experiment result is shown in Figure 3∼5: Experimental results show that the proposed method has better performance than the traditional CN model. Although the performance differences between the two algorithms is not stable, the average cost of the three groups of experiments with different parameter settings was reduced by 5.60%, 8.26% and 2.57% respectively compared with the CN model. The following conclusions can be drawn: • With the increase of threshold value, the compensation cost of the two models will increase at the same time, but the advantage of the algorithm in this paper is more significant. • With the increase of task demand, the salary cost of the two models will expand at the same time, but the cost difference between the two models will remain unchanged in numerical value, which makes the performance advantage brought of the algorithm no longer obvious. • The algorithm proposed in this paper performs better in the scenario of high measurement threshold and low demand for a single kind of product. 
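For reference, the employer-side decision of the judge stage described above can be condensed into a few lines of Python; the helper callables and the particular contribution value chosen here are illustrative, not part of the original algorithm:

```python
def judge_request(started_supplying, find_replacements, compute_c2, c1_saved):
    """Judge stage of the three-stage negotiation, seen from the employer I_t2.
    find_replacements() runs the PreDecide step over free capacity only and
    returns the quantity that could not be replaced; compute_c2() returns the
    extra cost C_2 of the replacement plan."""
    if started_supplying:
        return ("refuse", None)
    unmet_quantity = find_replacements()
    if unmet_quantity > 0:
        return ("refuse", None)
    c2_lost = compute_c2()
    if c2_lost >= c1_saved:
        return ("refuse", None)
    contribution = (c2_lost + c1_saved) / 2.0   # any value in [C_2, C_1) is admissible
    return ("agree", contribution)
```

Any contribution in [C_2, C_1) compensates I_t2's substitution loss while still leaving I_t with a net saving, which is why the interval is open at C_1.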
RESEARCH ON HIGH RESOLUTION REMOTE SENSING IMAGE CLASSIFICATION BASED ON SEGNET SEMANTIC MODEL IMPROVED BY GENETIC ALGORITHM SegNet model is an improved model of Full Convolutional Networks (FCN). Its encoder, i.e. image feature extraction, is still a convolutional neural network (CNN). Aiming at the problem that most traditional CNN training uses error back propagation algorithm (BP algorithm), which has slow convergence speed and is easy to fall into local optimum solution, this paper takes SegNet as the research object, and proposes a method of extracting partial weights by using genetic algorithm (GA) to select features of SegNet model, and to alleviate the problem that SegNet is easy to fall into local optimal solution. In the training process of SegNet model, the weight of convolution layer of SegNet model used to extract features is optimized through selection, crossover and mutation of genetic algorithm, and then the improved SegNet semantic model (GA-SegNet model) is obtained by GA. In order to verify the image classification effect of the proposed GA-SegNet model, the same high-resolution remote sensing image data are used for experiments, and the model is compared with maximum likelihood (ML), support vector machine (SVM), traditional CNN and SegNet semantic model without GA improvement. The experimental results show that the proposed GA-SegNet model has the best classification accuracy and effect, which GA overcomes the problem of premature convergence of BP random gradient descent to a certain extent, and improves the classification performance of SegNet semantic model. INTRODUCTION In recent years, with the rapid development of remote sensing technology, the spatial resolution of remote sensing image is getting higher and higher, and the spatial information of image is also more rich and detailed. At the same time, the correlation between information in high-resolution remote sensing images is more complex, and the key to extract the information of remote sensing images is to select a reasonable and effective image classification method. High-resolution remote sensing images have rich texture features and obvious geometric structure. Designing a more efficient and accurate classification model of remote sensing images can quickly and accurately grasp the number and distribution of various types of objects in the region. It has important practical significance for environmental protection, urbanization construction and sustainable development. At present, the commonly used classification algorithms in the field of remote sensing image classification are maximum likelihood (ML) (Milas et al., 2017, Peng et al., 2018, ISO clustering (Tao et al., 2018), support vector machine (SVM) (Wang et al., 2017, random forest (RF) (Liu et al., 2012, Shi et al., 2018, neural network (NN) algorithms (Nogueira et al., 2016, Ma et al., 2018. However, these commonly used methods basically rely on the spectral characteristics of images for classification. When the spectral characteristics of remote sensing images are low, the classification effect of these methods will be greatly affected. At this stage, deep learning algorithms have shown immeasurable potential in image classification , Ma et al., 2016, target detection (Druzhkov andKustikova, 2016, Han et al., 2018), speech recognition (Dai, 2017). 
Since image semantics segmentation algorithm has strong spatial feature extraction ability, an increasing number of scholars have applied it to remote sensing image classification, such as Markov random field algorithm in document (Nishii, 2003, Yang et al., 2013, Bayesian algorithm in document (Bruzzone, 2000, Li andYin, 2013), and conditional random field algorithm in document (Zhong et al., 2011, Zhong et al., 2014, Guo et al., 2016. However, these traditional segmentation methods have a large amount of parameters, and the efficiency of image segmentation is relatively low. Literature (Long et al., 2014) proposes a classical full convolutional networks (FCN) based on semantic segmentation network. Classification based on FCN remote sensing imagery is an end-to-end architecture. The network can restore image size by up-sampling, which can not only recognize the category of pixels, but also restore the original pixels. Pixel-level classification of image is realized by locating in the original image. FCN abandons the full connection layer of traditional convolutional neural network, reduces the parameters of the neural network, reduces the complexity of the network, and improves the efficiency of segmentation. Literature (Chen et al., 2018) constructs a remote sensing image classification framework using FCN, and realizes dense pixelwise classification of high resolution remote sensing images. Literature (Badrinarayanan et al., 2017a) proposed the SegNet semantic segmentation model, the SegNet model is an improved model of FCN, It inherits the idea of FCN image semantic segmentation. The network combines the features of encoder-decoder structure and hopping network to make the model more Accurate output feature maps provide more accurate classification results with limited training samples. Literature (Yang et al., 2019) proposed the use of SegNe semantic model for high-resolution remote sensing imagery rural construction land extraction, and finally formed a better classification model. However, whether it is the traditional FCN or the SegNet semantic model, the encoder image extraction part is CNN. The most commonly used model training method of CNN is the back propagation algorithm. The core idea of this algorithm is random gradient descent, slow convergence speed, easy to fall into local optimal solution, which affects the image segmentation efficiency of the model. Designing a semantics segmentation model that can make the weight parameters converge globally effectively is still the research focus of using semantics segmentation model to classify high-resolution remote sensing images at present. Therefore, in view of the above problems, and considering that genetic algorithm can inspire adaptive global search, this paper proposes to use genetic algorithm to select the weight of convolution layer in the encoder part of SegNet Semantic Model, in order to improve the BP algorithm convergence speed is slow, easy to fall into the problem of local optimal solution, and then improve the classification efficiency and accuracy of high-resolution remote sensing images. EXPERIMENTAL DATA SET The data set used in this experiment is a high-resolution remote sensing image of a region in southern China in 2015. It contains five large-scale RGB remote sensing images with spatial resolution of sub-meter and size ranging from 3000×3000 pixels to 6000×6000 pixels. Four types of objects are marked in the image, namely water body (Mark 1), vegetation (Mark 2), building (Mark 3) and road (Mark 4). 
Among them, grassland, cultivated land and woodland are classified as vegetation. Because the memory of computer is limited and the size of remote sensing image varies, the image can not be directly input into the neural network for training. Therefore, firstly, the remote sensing image is cut randomly, and the x and y coordinates are generated randomly on the image. Then, the 256×256 pixels small image is cut out under the coordinates. Thus, a small part of the training samples whose width and height are all 256 pixels are obtained. Figure 1 shows two original maps and corresponding label maps in the data set. Because there are fewer data for network training and verification, the data is enhanced, and the corresponding enhancement function is written using the OpenCV library. The enhancement function can rotate the original image and the label image by 90 degrees, 180 degrees, and 270 degrees, and Perform mirroring along the y-axis, then blur the original image, adjust the illumination, and increase the noise operation. After data enhancement operation, a large training set of 15,000 256×256 pixels pictures were obtained as input data for model training. SegNet Model SegNet semantic segmentation model follows the design idea of FCN and improves FCN. The difference between them lies in the different technologies used in the encoder and decoder parts of the network structure. The whole framework of SegNet is shown in Figure 2 (Badrinarayanan et al., 2017b). The left half of the framework is the convolution feature extraction part, which enlarges the local receptive field and reduces the size of the picture by pooling operation. This process becomes Encoder. The Encoder part uses the first 13-layer convolution network of VGG16; The right half is the deconvolution and upsampling operation. The features of the classified image are reproduced by the deconvolution operation. The input image is restored to the original size by upsampling. The process is Decoder. Finally, through the Softmax layer, the maximum probability of different categories is output, and the final image segmentation image is obtained. Figure 2. SegNet framework The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-3/W10, 2020 International Conference on Geomatics in the Big Data Era (ICGBD), 15-17 November 2019, Guilin, Guangxi, China Compared with FCN, SegNet semantic model has been improved in pooling and upsampling operations. Pooling in CNN can reduce the size of a picture by half. Pooling usually operates in two ways: max-pooling ( Figure 3) and meanpooling. Unlike the traditional pooling operation, the pooling operation in SegNet has an additional index function, that is, after each pooling operation, the corresponding position of the weight selected by max-pooling operation in 2×2 filter is saved, such as the number "5" in figure 3, index starting from 0, the position of "5" in red 2×2 filter is (1, 1), and the index corresponding to blue "3" is (0, 0). Upsampling operation is the inverse process of pooling operation. Upsampling operation can increase the size of the image. SegNet model uses index information to directly put data back to the corresponding location in the operation of upsampling, and then convolution layer training and learning. In addition to occupying some storage space, this upsampling operation does not need training and learning. 
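To make the index-saving pooling and the corresponding upsampling concrete, the following is a small NumPy sketch of the mechanism on a single-channel feature map with 2×2 windows; it illustrates the idea only and is not the SegNet implementation itself:

```python
import numpy as np

def max_pool_with_indices(x):
    """2x2 max-pooling that also records, for each window, the position of
    the maximum (the 'index' SegNet stores for later upsampling)."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2), dtype=x.dtype)
    indices = np.zeros((h // 2, w // 2), dtype=np.int64)  # flat index 0..3 inside the window
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            window = x[i:i + 2, j:j + 2]
            flat = int(np.argmax(window))
            pooled[i // 2, j // 2] = window.flat[flat]
            indices[i // 2, j // 2] = flat
    return pooled, indices

def max_unpool(pooled, indices):
    """SegNet-style upsampling: each pooled value is written back to the
    recorded position of its window; the remaining positions stay zero and
    the subsequent convolution layers learn to densify the result."""
    h, w = pooled.shape
    out = np.zeros((h * 2, w * 2), dtype=pooled.dtype)
    for i in range(h):
        for j in range(w):
            di, dj = divmod(int(indices[i, j]), 2)
            out[2 * i + di, 2 * j + dj] = pooled[i, j]
    return out

x = np.array([[1, 5, 2, 0],
              [3, 4, 7, 1],
              [0, 2, 1, 8],
              [6, 1, 3, 2]], dtype=float)
p, idx = max_pool_with_indices(x)   # p = [[5, 7], [6, 8]]
restored = max_unpool(p, idx)       # sparse 4x4 map with the maxima back in place
```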
In contrast, FCN uses a deconvolution strategy: the feature maps are upsampled by a learned deconvolution operation, so this process requires training. Figure 4 illustrates the difference between the FCN and SegNet semantic models in the upsampling operation (Badrinarayanan et al., 2017b).

Genetic Algorithm

GA is a random search algorithm based on natural selection and the mechanisms of biological evolution. It is well suited to complex, non-linear optimization problems that are difficult to solve with traditional search algorithms. The genetic algorithm starts from a randomly generated initial solution and produces new solutions by iterating through selection, crossover and mutation operations. The concrete steps of the algorithm are as follows:

1. Population initialization. Because the genetic algorithm cannot operate directly on the parameters of the problem space, the feasible solutions of the problem must be encoded as chromosomes in the genetic space. Common coding methods include bit-string coding, Gray coding and real-number coding.

2. Fitness calculation. The fitness function is the criterion for judging the quality of individuals in a population and the only basis for natural selection; it is usually derived from the objective function.

3. Selection operation. Selection chooses good individuals from the old population with a certain probability to form a new population and generate the next generation. The larger an individual's fitness value, the higher its probability of being selected. The probability of individual i being selected is

P_i = F_i / Σ_{j=1}^{N} F_j

where F_i is the fitness value of individual i and N is the number of individuals in the population.

4. Crossover operation. Crossover randomly selects two individuals from the population and exchanges parts of their chromosomes, so that the good genes of the parents are passed on to the offspring and new, better individuals are produced. The crossover of the k-th chromosome r_k and the l-th chromosome r_l at position j is

a_kj = a_kj (1 − b) + a_lj b,  a_lj = a_lj (1 − b) + a_kj b

where b is a random number in the [0, 1] interval.

5. Mutation operation. Mutation perturbs the j-th gene a_ij of individual i within its bounds, with a step size that shrinks as evolution proceeds:

a_ij = a_ij + (a_max − a_ij) · f(g) if r ≥ 0.5,  a_ij = a_ij + (a_min − a_ij) · f(g) if r < 0.5,  with f(g) = r_2 (1 − g / G_max)^2

where a_max is the upper bound of gene a_ij, a_min is the lower bound of gene a_ij, r_2 is a random number, g is the current iteration number, G_max is the largest evolution number, and r is a random number in the [0, 1] interval.

Design of GA-SegNet Model

The GA-SegNet model is implemented in the Python language. Figure 5 introduces the main technical process of using the genetic algorithm to improve the SegNet semantic segmentation model. The main design idea of GA-SegNet is as follows: chromosomes corresponding to the convolution-layer weights of the SegNet encoder are first generated randomly by the genetic algorithm, with each chromosome containing as many genes as there are weights in total. The randomly generated genes are then assigned as weights to the corresponding encoder convolution layers, the SegNet model is trained on the input training and validation samples, and the validation accuracy (acc) of the final output is used as the fitness function of the genetic algorithm; the fitness value evaluates the quality of each chromosome. Finally, the population is updated iteratively through the selection, crossover and mutation operations, and the fitness of the new population is calculated.
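The three genetic operators in steps 3-5 above can be sketched as follows; this is a simplified real-coded illustration with placeholder fitness values and parameters, not the GA-SegNet code itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette_select(population, fitness):
    """Selection: individual i is drawn with probability P_i = F_i / sum_j F_j."""
    probs = np.asarray(fitness, dtype=float)
    probs = probs / probs.sum()
    chosen = rng.choice(len(population), size=len(population), p=probs)
    return [population[i].copy() for i in chosen]

def arithmetic_crossover(chrom_k, chrom_l, j):
    """Crossover at gene position j:
    a_kj <- a_kj*(1-b) + a_lj*b  and  a_lj <- a_lj*(1-b) + a_kj*b."""
    b = rng.random()
    a_kj, a_lj = chrom_k[j], chrom_l[j]
    chrom_k[j] = a_kj * (1 - b) + a_lj * b
    chrom_l[j] = a_lj * (1 - b) + a_kj * b

def nonuniform_mutation(chrom, j, a_min, a_max, g, g_max):
    """Mutation at gene position j; the step f(g) = r2*(1 - g/G_max)^2
    shrinks as the generation counter g approaches G_max."""
    r, r2 = rng.random(), rng.random()
    f = r2 * (1 - g / g_max) ** 2
    if r >= 0.5:
        chrom[j] = chrom[j] + (a_max - chrom[j]) * f
    else:
        chrom[j] = chrom[j] + (a_min - chrom[j]) * f

population = [rng.uniform(-0.05, 0.05, size=8) for _ in range(4)]
fitness = [0.61, 0.88, 0.47, 0.73]          # e.g. validation accuracies
parents = roulette_select(population, fitness)
arithmetic_crossover(parents[0], parents[1], j=3)
nonuniform_mutation(parents[2], j=5, a_min=-0.05, a_max=0.05, g=2, g_max=5)
```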
The fitness of the semantics model SegNet encoder leaves a chromosome with high fitness in the iteration, and then gets the optimal weight of the convolution layer of the semantics model SegNet encoder. The key point of GA-SegNet model design is how to determine the number of chromosome nodes, and convert it into a weight matrix consistent with the form of convolution layer to give the network convolution layer. Finally, the fitness function of genetic algorithm is selected. In order to determine the number of nodes (variables) of each chromosome in genetic algorithm, the weight matrix of each convolution layer in SegNet coding part of the semantic model is obtained, and the number of elements of the weight matrix is calculated. The total weight number of all convolution layers, i.e. the number of nodes of chromosomes is obtained. After the population initialization of the genetic algorithm is performed, the gene string of each chromosome in the population is sliced, the number of segment nodes of the slice is equal to the weight of each convolution layer, and is set to the convolution layer weight matrix. In a completely consistent form, the matrix obtained by gene conversion is directly assigned to the convolutional layer of the encoder to train the network. As for the fitness function determination of GA, the accuracy of the final output of the validated samples in the training process of the model can directly explain the selection of the weight parameters of the whole network and indirectly evaluate the merits and demerits of each chromosome gene, so the accuracy of the final output of the validated samples in the network is regarded as the fitness function of GA. Training and Validation Sample Organization In order to input sample size smoothly into the semantic model SegNet and ensure the best training effect, and maximize the classification accuracy of the model, the cutting size is 256×256 pixels to ensure the accuracy of classification. Finally, when the model reads the data set, the size of the validation set selected is 25% of the training set. Finally, the model forms 11,250 training samples with 256×256 pixel size and 3,750 validation samples with 256×256 pixel size. Model Parameter Setting Before training GA-SegNet model, some basic parameters of the model need to be set. Learning rate is used to control the global learning rate of the model. Excessive and low learning rate will lead to slow divergence and convergence of the model, respectively. In this experiment, the initial learning rate is set to 0.01; momentum can be used to control the global learning rate. Accelerating the convergence speed of the model is set to 0.8; the learning rate change index (gamma) determines the acceleration of learning rate and sets it to 0.1; the weight decay value (weight decay) and the stepsize evaluation rate (stepsize) are respectively set to 0.0004, 1500; considering the batch sizes that computer memory can withstand. After testing with different batch sizes, the final settings are 4, and the number of model epochs is 15. In the basic parameter setting of GA, the number of population is set to 60, the number of genetic iterations is set to 5, the probability of genetic crossover is set to 0.8, and the probability of variation is set to 0.01. In terms of parameter setting of comparative experiments, the ML, SVM, traditional CNN and model SegNet were used to classify high-resolution remote sensing images. 
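As a concrete illustration of the chromosome-to-weight conversion described above, the sketch below slices a flat gene string into per-layer kernels; the kernel shapes are invented for illustration, and the assignment to an actual SegNet encoder is only indicated in the comments:

```python
import numpy as np

# Hypothetical kernel shapes (height, width, in_channels, out_channels) for a few
# encoder convolution layers; the real SegNet encoder uses the first 13
# convolution layers of VGG16.
encoder_kernel_shapes = [(3, 3, 3, 64), (3, 3, 64, 64), (3, 3, 64, 128)]

def chromosome_length(shapes):
    """Number of genes = total number of convolution weights."""
    return sum(int(np.prod(s)) for s in shapes)

def chromosome_to_weights(chromosome, shapes):
    """Slice the flat gene string into one weight tensor per layer, in a form
    that can be assigned to the matching encoder convolution layers."""
    weights, offset = [], 0
    for shape in shapes:
        n = int(np.prod(shape))
        weights.append(np.asarray(chromosome[offset:offset + n]).reshape(shape))
        offset += n
    return weights

n_genes = chromosome_length(encoder_kernel_shapes)
chromosome = np.random.uniform(-0.05, 0.05, size=n_genes)   # one random individual
layer_weights = chromosome_to_weights(chromosome, encoder_kernel_shapes)
# Each element of layer_weights would be assigned to the corresponding
# convolution layer before training, and the validation accuracy of the
# resulting SegNet run is then used as this chromosome's fitness.
```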
When using the SVM for classification, the maximum number of examples for each category is set to 500; in the image classification using the traditional CNN of the two layers of the convolutional pooling layer and the unmodified semantic model SegNet, the settings of learning rate, gamma, momentum value, weight decay value, stepsize frequency and iteration times are the same as that of GA-SegNet model. Experimental Results After the training of GA-SegNet model is completed, the GA-SegNet.h5 file of the model is saved, which contains the structure and weight parameters of the model. The test remote sensing image is input into the trained model for prediction and classification. At the same time, the same test is carried out using ML, SVM, traditional CNN and the SegNet model. The remote sensing image is classified and predicted. Figure 6 is the result of remote sensing image classification of the method and the comparison method in this paper. Accuracy Analysis Accuracy evaluation of classification results is an important work in the process of remote sensing image classification. Usually, confusion matrix, classification accuracy of various categories, overall classification accuracy (OA), Kappa coefficient are used as the evaluation index of image classification accuracy. In order to verify the validity of GA-SegNet model, and based on the real image and classification result of high resolution remote sensing images, 1000 random points are generated on the real image using ArcMap, and the confusion matrices of classification results between the proposed method and the comparison method are calculated respectively (Tables 1-5). According to the confusion matrix of each classification result, the corresponding classification accuracy of single object category, overall classification accuracy and Kappa coefficient can be obtained. The accuracy comparison of classification results is shown in Through the analysis of the data in Table 6, it can be seen that the classification accuracy of water body has increased from 0.7764 of ML to 0.9228 of this method, and the accuracy has increased by 0.2164. The classification accuracy of vegetation, buildings and roads has increased by 0.0952, 0.1344 and 0.2235 respectively from ML to this method. In terms of OA, the accuracy of the method in this paper and the comparison method is more than 82%. Compared with the ML and SVM the overall classification accuracy of the method in this paper is significantly improved. Compared with the 90.40% and 92.60% of the traditional CNN method and SegNet semantic model, it is also slightly improved, reaching 95.20%. In terms of Kappa coefficients, the Kappa coefficients of the comparison methods are below 0.9. The Kappa coefficients of the ML and the SVM are below 0.8. The traditional CNN method and SegNet semantic model are 0.8564 and 0.8889, respectively. The highest Kappa coefficients of this method is 0.9279. Whether from the single category classification accuracy, OA or Kappa coefficient, it can be seen that the method in this paper is superior to the other four comparison methods, showing better classification ability. CONCLUSION Aiming at the problem that the error back propagation algorithm used in SegNet semantic model encoder is easy to fall into local optimal solution, this paper proposes to introduce genetic algorithm into the weight optimization of SegNet semantic model encoder, so that the original SegNet model has the advantage of global convergence of genetic algorithm. 
In the high-resolution remote sensing image classification experiments, the GA-SegNet model presented in this paper achieved the best classification effect and accuracy among the ML, SVM, traditional CNN and unmodified SegNet models in terms of single-category classification accuracy, OA and Kappa coefficient. To some extent, the genetic algorithm overcomes the tendency of BP stochastic gradient descent to converge prematurely to local optima, thereby improving the classification performance of the SegNet semantic model. Future work will focus on how to set the basic parameters of the genetic algorithm so that it delivers its full benefit.
Blocking Techniques for Sparse Matrix Multiplication on Tensor Accelerators Tensor accelerators have gained popularity because they provide a cheap and efficient solution for speeding up computational-expensive tasks in Deep Learning and, more recently, in other Scientific Computing applications. However, since their features are specifically designed for tensor algebra (typically dense matrix-product), it is commonly assumed that they are not suitable for applications with sparse data. To challenge this viewpoint, we discuss methods and present solutions for accelerating sparse matrix multiplication on such architectures. In particular, we present a 1-dimensional blocking algorithm with theoretical guarantees on the density, which builds dense blocks from arbitrary sparse matrices. Experimental results show that, even for unstructured and highly-sparse matrices, our block-based solution which exploits Nvidia Tensor Cores is faster than its sparse counterpart. We observed significant speed-ups of up to two orders of magnitude on real-world sparse matrices. INTRODUCTION W ITH the advent of Deep Learning (DL) as the mainstream methodology for extracting new knowledge from massive amounts of data, a plethora of special-purpose architectures have been developed for accelerating the training and inference phases of Deep Neural Networks (DNN). Most popular examples include accelerators like the Tensor Processing Unit (TPU) [1] or the Intelligent Processing Unit (IPU) [2] and compute units like the Nvidia Tensor Cores (TCs) [3] among others [4]. Tensor accelerators target the most expensive operation in DNNs, namely the multiply-and-accumulate operation, and specialize in the parallel multiplication of large batches of small dense matrices. Recently, their design and capabilities also enable the acceleration of several fundamental primitives of traditional scientific applications [5], [6], [7], offering challenges and opportunities from the theoretical viewpoint [8], [9]. However, a broad class of applications, including DNNs themselves, Graph Neural Networks and Graph Analytics, is characterized by sparse input domains, resulting in highly-irregular computations and memory access patterns. This limits the scalability of parallel algorithms with low arithmetic intensity, such as sparse matrix multiplication (SpMM), even on architectures that offer high-bandwidth memories like Graphics Processing Units (GPU) [10]. From the architecture design perspective, there are examples of highly-specialized hardware that natively perform sparse multiplication [11], [12], but most of them come with several limitations regarding the data-layout (e.g. they assume squared matrices only) or the sparsity pattern (e.g. structured sparsity only [3]). This limits their adoption and diffusion [13]. For all these reasons, high-performance SpMM solutions tend to be extremely customized, over-fitted to the problem, data and hardware at hand [14], [15]. This results, inevitably, in a lack of performance portability and usability. The diffusion and cost-effectiveness of tensor accelerators, as well as the availability of libraries for dense matrix multiplication, suggest investigating how to exploit such architectures to accelerate SpMM. Intuitively, solutions to this problem would cluster the nonzero elements of the matrix and feed them to compute units specialized for the dense product. However, there are many possible ways to implement such a "compression". 
One approach would consist of explicitly storing the coordinates of non-zero elements of the matrix into external ad-hoc data-structures, performing the dense multiplication and then storing back the results [16]. An alternative approach consists in looking for dense, or almost dense substructures of the input matrix. This allows re-using higher-level routines that are already optimized for dense multiplication, on top of the tensor accelerator. Unfortunately, dense substructures in sparse matrices are often hidden by the arbitrary labelling of rows and columns. Here, blocking algorithms try to solve this by permuting and grouping along the dimensions of the matrix. Most existing blocking algorithms focus on symmetric matrices and symmetric reorderings, treating the matrix as the adjacency matrix of a graph to reduce the number of iterations of numerical solvers [17], [18], [19]. A straightforward application of such methods work poorly on matrices with arbitrary shapes (e.g. sparse layers in DNNs) and sparsity patterns (e.g. real-world graphs). Furthermore, they only partially control the size and internal sparsity of the blocks, failing to consistently improve the performance of SpMM on parallel architectures [20]. To address these challenges, in this paper we present a method to build blocks of tunable size with theoretical guarantees on the density. Specifically, we designed a 1dimensional reordering algorithm that overcomes the aforementioned limitations. We discuss under what conditions (block dimensions and density) this approach can efficiently arXiv:2202.05868v1 [cs.DC] 11 Feb 2022 exploit tensor accelerators for spMM. Finally, we present evidence that, for a wide class of sparse matrices, the recent advancements in dense accelerators make it convenient to use traditional dense-specific routines instead of sparsespecific ones. Our contributions can be summarized as follows: • We present 1-SA, a 1-dimensional blocking algorithm that can decompose a general sparse matrix in dense blocks, with control over the size and density of blocks. • We provide theoretical guarantees on the density of the reordered matrix produced by the 1-dimensional blocking algorithm. We also asymptotically analyze the running time of a sparse-dense matrix multiplication when using a reordered sparse matrix, by using the TCU computational model for Tensor Cores [9], [21]. • We analyze the efficacy of 1-SA on a large synthetic dataset and we provide experimental evidence that even for very sparse matrices, using a block-based SpMM routine on tensor accelerators is more efficient than using a sparse-specific one in the Nvidia hardware/software stack. Specifically, we observe significant performance improvement (from 3x up to 100x) on real-world sparse matrices when using cuBLAS compared to cuSPARSE. The manuscript is organized as follows. In Section 2, we provide the background and notation used for presenting the methodology and the algorithms discussed in Section 3. Experimental evaluation of both the blocking algorithm and its effect on multiplication performance is addressed in Section 4. Finally, we provide an overview of other relevant works in Section 5 and we draw conclusions in Section 6. BACKGROUND The development of new sparse multiplication kernels is tightly related to sparse storage formats used for representing the data. 
Dense storage formats allow for good spatial and temporal locality when multiplying, and are especially effective for high-performance parallel multiplication, but have an obvious downside -the explicit storing (and processing) of all the zero entries. For very sparse matrices, these can be several orders of magnitude more than the nonzeros. In most practical situations, the additional storage space and multiplication workload eat away any benefit coming from the data regularity. Efficient sparse storage formats provide ways to store only, or mostly, nonzero entries. The most popular, general purpose storage format is the Compressed Sparse Rows (CSR) format. CSR stores only the nonzero entries, row by row, in a contiguous array, and uses two additional arrays to keep track of their positions. However, the flexible structure of CSR makes it hard to optimize spMM on the GPU, due to low spatial and temporal locality and unbalanced workloads [22]. A great deal of research has been focused on mitigating these issues, either by mapping a CSR multiplication efficiently on the GPU [22], [23] or by devising modified versions of the CSR format, such as CSR5 [24]. Blocked storage formats achieve a middle ground between sparse and dense storage formats. These formats, such as the blocked CSR (BCSR) [25], are a popular choice for spMM on the GPU, because they reduce the indexing overhead and improve data reuse for suitable matrices [14], but they pose unique challenges. Fill-in (zeros stored as nonzeros) is an inevitable consequence of most types of blocking which increases their memory and computational footprint, particularly on unstructured sparse matrices. Flexible storage formats such as Unaligned BCSR (UBCSR) [25] may reduce fill-in, but come with the costs of a heavier indexing structures. In this work, we will employ the Variable Block Row (VBR) [26] format, a variant of BCSR which allows for blocks of variable size. To further reduce fill-in, we will make use of reordering and blocking techniques that rearrange the nonzero entries to expose dense substructures within the matrix. Similarity-based blocking Our analysis starts from an algorithm developed by Saad [19] for the preconditioning of iterative solvers, in turn based on the seminal work of Ashcraft [27]. Ashcraft's algorithm, briefly exemplified in Algorithm 1, uses hashing to identify (and group together) identical rows (and columns) in a symmetric matrix. Then, by reordering the rows and columns according to the resulting partition, it clusters the nonzeros in perfectly dense blocks. Saad's version further compresses the symmetric matrix by also grouping rows and columns which are not identical, but just similar. Thus, it creates blocks which are only approximately dense. Unfortunately this approach, while effective in creating big, almost dense blocks, is not directly applicable to rectangular or asymmetric matrices. In this paper, we will consider instead 1-dimensional blocking algorithms, that is, those that only reorder one dimension (e.g., the rows) of the matrix leaving the other dimension untouched. The benefit of 1dimensional blocking is that, when multiplying our sparse matrix by a dense one, we can leave the original order of the latter intact. This saves us the cost of reordering the dense matrix at run-time. In general, 1-dimensional reorderings allows us to keep any predetermined convenient ordering and grouping of the columns intact. 
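A tiny example may help fix the CSR layout discussed above; scipy is used here purely for illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix

# A 4x6 sparse matrix with 6 nonzeros.
dense = np.array([[1, 0, 0, 2, 0, 0],
                  [0, 0, 3, 0, 0, 0],
                  [0, 0, 0, 0, 0, 0],
                  [4, 5, 0, 0, 0, 6]], dtype=float)

A = csr_matrix(dense)
print(A.data)     # nonzero values, row by row: [1. 2. 3. 4. 5. 6.]
print(A.indices)  # their column indices:        [0 3 2 0 1 5]
print(A.indptr)   # row pointers:                 [0 2 3 3 6]
# A blocked format such as BCSR/VBR would instead store small dense tiles
# (including any fill-in zeros inside a tile) plus per-block index arrays.
```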
In Section 3 we will detail 1-SA, a modification of Saad's algorithm that reorders one dimension of a rectangular sparse matrix, and forces blocks of fixed size on the other dimension. We start by discussing the algorithm from Saad [19]. We will then show how to extend Saad's algorithm to the case of 1-dimensional blocking. Saad's algorithm (SA) takes as input a symmetric matrix A and a similarity parameter τ . The first step in SA is a graph compression technique due to Ashcraft [27]. This technique amounts to hash each row so that identical rows are binned together. Ashcraft propose the simple but effective hash function where v is a row and i and j are respectively row and column indices. Eq. (1) will hash identical rows to the same value. After checking for hash collisions, partitioning identical rows (and columns) of A together will expose perfectly dense blocks in A. The hash function in (1) is quite prone to collisions, but it has the advantage of being very fast to compute. As a matter of fact, the time spent checking for collisions is negligible compared to other steps of the procedure. Moreover, collisions can be reduced by taking in account the size of the rows in addition to their hash. Algorithm 1 Hash-based compression Input: A CSR matrix. Output: a mapping from rows to groups for j = i + 1, . . . N do 9: if h(v j ) = h(v i ) then break 10: else 11: if group(v j ) = −1 then 12: if pattern(v i ) = pattern(v j ) then 13: group(v j ) = i; 14: end if 15: end if 16: end if 17: end for 18: end for In some matrices an implicit block structure exists but is not perfect, meaning that many nodes are similar but not identical. Uncovering these imperfect substructures may allow to find larger blocks with a moderate loss in density. SA measures the similarity of nodes with cosine similarity, namely Other similarity measures may be easily substituted in place of Eq. 2. In our algorithm, we will use Jaccard similarity instead, which provides us with better theoretical bounds. The jaccard similarity of two sets A, B (in this case, the sets of nonzero indices in two rows) is defined as: After compressing the graph with Ashcraft's method, SA compares the (compressed) nodes against each other, and merges them if their cosine similarity exceeds τ . SA groups rows and columns in the same way to generate a blocking of the symmetric matrix. SA can be readily adapted to rectangular matrices, as briefly mentioned in Saad's paper [19]. It suffices to drop the assumption of symmetry, and just compress and compare the rows (or the columns) of the matrix using Eq. (1) and Eq. (2). Ater the rows have been blocked, the columns may be then partitioned (e.g., uniformly) to create rectangular blocks. Unfortunately, this naive 1-dimensional implementation of SA is not nearly as effective as its symmetric counterpart, as it fails to group together elements which are close but not exactly on the same column. This limitation is addressed by our algorithm, 1-SA, which first partition the columns and then reorders the rows, thus accounting for entries with close indices when comparing rows. Evaluating the quality of blockings To evaluate the effectiveness of a blocking algorithm, two aspects must be considered: the size of the blocks created, and their density. Coarser blockings reduce indexing and promote regular data access, but also introduce more fillin, i.e. treating zero elements as non-zeros. On the other hand, finer blockings are usually denser at the cost of less data locality. 
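The hashing-and-similarity machinery described above can be reduced to a short sketch. This is a simplified illustration of the idea behind 1-SA (quotient rows over a fixed column partition, Jaccard similarity, greedy merging with pattern update); it is not the authors' implementation, and it omits both the hash-based compression of identical rows and the additional growth limit introduced later for the theoretical bounds:

```python
def quotient_pattern(row_cols, delta_w):
    """Project a row's nonzero column indices onto a uniform column
    partition of width delta_w (the 'quotient' row)."""
    return frozenset(c // delta_w for c in row_cols)

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a or b) else 1.0

def block_rows(rows, delta_w, tau):
    """Greedy 1-dimensional blocking: a new group is opened from the first
    unassigned row, and later rows are merged into it when their Jaccard
    similarity with the current group pattern is at least tau.  The full
    algorithm first compresses identical quotient rows by hashing (Eq. (1));
    that step is skipped here for brevity."""
    patterns = [quotient_pattern(r, delta_w) for r in rows]
    group_of = [-1] * len(rows)
    groups = []
    for i, p in enumerate(patterns):
        if group_of[i] != -1:
            continue
        group_pattern = set(p)
        members = [i]
        group_of[i] = len(groups)
        for j in range(i + 1, len(rows)):
            if group_of[j] == -1 and jaccard(group_pattern, patterns[j]) >= tau:
                group_pattern |= patterns[j]      # pattern update (bitwise OR)
                group_of[j] = len(groups)
                members.append(j)
        groups.append(members)
    return groups

rows = [{0, 1, 65}, {2, 64}, {0, 66}, {130}]      # nonzero column indices per row
print(block_rows(rows, delta_w=64, tau=0.5))       # [[0, 1, 2], [3]]
```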
In general, a trade-off exists between the size and density of the blocks. Ultimately the target task -in our case, spMM -will determine which balance between the two properties is the most desirable. In general, a good blocking algorithm should be able to explore the densitysize trade-off to attain a satisfying density for a target block size. Both SA and 1-SA are equipped with a parameter τ to explore this trade-off. Varying τ allows us to tune the density of the blocks, and indirectly, their size. For this reason, we will not evaluate our algorithm on single-shot blockings, but on entire blocking curves -that is, we will observe how different choices of τ produce blockings with different combinations of block size and in-block density. Since we will explore 1-dimensional reordering algorithms, only the height of blocks will be allowed to change, while the width remains fixed by a predetermined partition of the columns. Figure 1 shows an example of a reordering curve for a direct 1-dimensional implementation of SA, where several synthetic blocked matrices have been scrambled and then blocked again. For all but the denser matrices, SA failed to recover the original blocking. As we will see in section 4.3, these results can be improved by tweaking SA to account for the predetermined column partition. Blocked matrices The j-th element of row v i (usually 0 or 1). v A row projected on the column partition. Width of the column partition ∆ H Height of the blocks. δ Overall matrix density. θ Fraction of nonzero blocks. ρ The overall density inside nonzero blocks ρ , ∆ H As above, but after blocking Our reordering algorithm, as detailed in Algorithm 2 τ Similarity threshold in SA and 1-SA The hash for vector v, evaluated as in Eq. 1 METHODOLOGY AND ALGORITHMS In this section, we present our one-dimensional modification of SA, called 1-SA. Its objective is to find a partition of the rows that clusters the nonzero entries of the matrix into blocks. The basic idea is to partition the columns in advance, and then reformulate the hash function (1) and the similarity function (3) to work on the "quotient" rows determined by the column partition. The advantage of this approach is that it is able to pair rows whose elements belong to spatially close columns. Moreover, it can control exactly the ordering and blocking of the second dimension by choosing the columns partition, which allows the blocking to be better tuned to the multiplication task at hand. We present a detailed analysis of our algorithm, along with some variations thereof, in section 3.1. The reordering and blocking algorithm We now detail how to extend SA to obtain a 1-dimensional blocking algorithm that fits our needs. We will refer to this algorithm as 1-SA. First, a partition Q for the columns is chosen. From now on we will consider the case of a regular partition with blocks of width ∆ W . The choice of ∆ W can be guided by knowledge about either the hardware or the matrix block structure. In the multiplication experiments of Section 4.4, we will always select ∆ W to be a multiple of 64, for the benefit of the Tensor Cores-based multiplication routine. Given a column partition Q = Q 1 ....Q K , we consider the Q−quotient version of a row v i , namely a K-dimensional binary vectorv i such that That is, the k-th element ofv is nonzero if and only if v has at least a nonzero in the positions corresponding to Q k . 
We then apply algorithm 1 to the quotient rows: for each quotient rowv (as in 4) a hash number is calculated as follows: Note that h(·) can be replaced by any hash function with stronger theoretical guarantees. After checking for collisions, identical (quotient) rows are grouped together. Note that two rows with different nonzero patterns may be grouped together, as long as their quotient pattern is the same. From now on, the rows have been compressed. A compressed row may represent any number of identical (quotient) rows. As in SA, the compression step is not necessary, but it will speed-up the rest of the computations. Without loss of generality, from now on we will assume the rows of our CSR matrix to represent any number of identical rows. After compressing identical rows, we create blocks by merging similar rows, in a fashion similar to SA. We sketch our blocking algorithm in Algorithm 2. The main loop in lines 4-18 starts by creating a group from the first not yet merged row (lines 7-8). Then, through the loop in lines 9-16, we iteratively try to merge compressed rows to the current group. As in SA, we check (line 11) if the similarity of the row with the current group exceeds a certain threshold τ . Unlike SA, we evaluate a more general merge condition before deciding whether to merge a row to the current group. In Section 3.2 we show that some theoretical bounds on reorder quality can be enforced if the merging criterion involves a minimum Jaccard similarity with an additional control on how a group of rows can grow. Finally, when a row is merged into a group (line 12), we update the block-pattern of the group to the bitwise-OR of the two merged patterns (line 13). The motivation behind this choice is that, from the moment a nonzero element is added to a group, all zeros in the corresponding group of columns will be treated as nonzeros. This loop continues until each row has been assigned to a group. The output of the algorithm is a partition of the rows in groups. Together with the columns partition Q, this determines a blocking of the matrix. Converting the matrix to the VBR formats using these rows and columns partitions allows for saving only the nonzero blocks of the matrix during the rest of the procedure. We provide a visual representation of 1-SA in Fig 2. In summary, we propose the following modifications to obtain 1-SA from SA: • a column partitioning, which allows the algorithm to recognize rows with spatially close entries; • a pattern update after each merge, which allows for better similarity evaluations at each step; • a two-steps merging criterion, which ensures theoretical bounds on the final density. Reordering with theoretical bounds Merging rows while the similarity between the pattern and the new row is at least τ often suffices for constructing sufficiently dense blocks from an experimental point of view. However, there can be pathological matrices that can generate sparse blocks with densities arbitrarily close to 0. Consider the following example. Let > 0 be an arbitrary value and let τ ≥ 0.5 the Jaccard similarity threshold. Consider the following set of The current group pattern is thus set to the projection of the first row. Subsequent rows will be compared to this pattern, and merged if they satisfy the merging criterion. In (b), the first suitable row is discovered. It is merged to the current group, and the pattern is updated accordingly. The algorithm continues until all rows have been evaluated and possibly merged (c). 
Then (d) a new group is created from the next merged row, and process repeats. Note that already grouped rows (green highlight) will not be evaluated again, and will create a row of blocks in the final matrix. This fact is visualized by partially reordering the rows in (d). In practice, the rows will be only reordered when all groups have been created. and zeros otherwise; row v +j for j ∈ [0, 1/4 ) has nonzero entries in the first j columns and zero otherwise. By merging rows in the order v 0 , . . . v + 1/4 −1 , we get that the similarity is 1 up to row v −1 and then it is in the range [0.5, 1). For τ ≥ 0.5, we then get a block of size ( + 1/4 ) × ( 1/4 ) with + 1/4 ( 1/4 + 1)/2 nonzero entries: the density is then Θ 1/ 1/4 . We now propose a stronger merging condition with theoretical guarantees on the density of the output blocks. Consider a group of rows with pattern p and let v 0 be the first row added to the group; then a new row v is added to the group if: 1) The similarity between the new row v and the pattern p is above the threshold τ ; 2) The number λ of nonzeros in the new pattern for j = i + 1, . . . , N do 10: if group[j] = −1 then 11: if MergeCondition(thisPattern,v j ) then We get the following result for Jaccard similarity. Theorem 1. Let G be a group of rows v 0 , . . . v h−1 , merged by the algorithm in this order using the aforementioned merging condition with Jaccard similarity threshold τ . Let λ 0 be the number of nonzeros in v 0 , and let λ be the total number of nonzero in the final pattern. If λ ≤ λ 0 /(1 − τ /2), then the density ρ G of G after removing empty columns is at least τ /(2∆ W ). Proof. We initially assume ∆ W = 1. Consider a group G of rows after the reordering, and let ρ G be its density after removing all empty columns. For the sake of notational simplicity, let us assume that it consists of h rows v 0 , . . . v h−1 , added by the algorithm in this order. For 0 ≤ i < h, let p i be the pattern after adding row v i , that is p i is the elementwise OR of v 0 , . . . v i ; we also let V i and P i represent the set containing column indexes of nonzero entries in v i and p i (for convenience, we set P −1 = ∅). For i ≥ 0, we define The set Λ i contains the column indexes that are added in G by row v i : for each k ∈ Λ i , we have that column k contains only zero entries in row v j for any j < i. On the other hand, the set Γ i contains the column indexes corresponding to zero entries in v i , but for which there exists at least a row in v 0 , . . . v i−1 with a nonzero entry. We define γ i = |Γ i | and λ i = |Λ i |, and Λ = P h−1 (i.e., the set of all columns in G with at least a nonzero entry). Note that We observe that the zero entries in row v i must be either in empty columns (which are removed in the density estimate), in columns in Γ i (i.e., the columns with nonzero entries in the preceding i − 1 rows, but not in v i ) or in Λ \ P i (i.e., the columns with nonzero entries in the rows that will be added by subsequent rows but not in the first i rows). Then, the total number Z of zeros in G after removing all empty columns is: By construction, we add row v i if Jaccard(P i−1 , V i ) ≥ τ and hence Since the total number of entries in G is hλ, the density ρ G is then For the case ∆ W > 1, we perform the previous analysis on the quotient rows. Since each nonzero entry in the compressed matrix corresponds to at least one nonzero and at most ∆ W − 1 zeros, we get the claimed result. Cost of 1-SA We consider a sparse matrix stored in CSR format. 
Without loss of generality, we ignore the actual values of the entries and only focus on the structure of the nonzero elements (i.e., we replace the nonzero values with 1). First, we analyze Algorithm 2 ignoring the pattern update (step 13). The running time of this simplified algorithm is O(N 2 k) for a N × N sparse matrix with, at most, k nonzero elements per row. The worst case happens when no two rows are merged, so that N (N − 1)/2 row comparisons have to be carried on. For both Jaccard and cosine similarity, If the pattern update (step 13) is used, the pattern may grow during a single pass through steps 4-17. When a row v j is merged to the current pattern, the latter grows at most by k j entries, so that the cost of the remaining comparisons becomes (N −j)(k +k j ) +k j+1 +...+k N assuming no other row is merged. Yet, since row v j has now been merged, the algorithm will not try to build a group out of it (see step 5), which would have a cost proportional to (N −j)(k j )+k j+1 + ... + k N . This is exactly the additional cost incurred by the pattern update. We conclude that switching on the pattern update does not influence the worst-case time analysis. Cost of matrix multiplication with reordering In this section, we provide an upper bound on the cost of multiplying a sparse matrix reordered with the strategy in Section 3.2 with a dense matrix. For the analysis, we use the (m, )-TCU model studied in [9], [21], where m and are two parameters characterizing the tensor. Specifically, the (m, )-TCU model is a traditional RAM model featuring a tensor core unit that can natively compute the matrix multiplication between two dense √ m × √ m matrices in time O (m + ), where is a latency cost. The multiplication between a r × c matrix and a c × s matrix requires time O rcs/m 1/2 + cs /m on a (m, )-TCU. Consider a N × N matrix A with K nonzero entries. Assume that the reordering described in the previous Section 3.2 gives H groups and let the i-th group G i be a r i × c i matrix. Let B be a dense N × N matrix. We now provide an upper bound on the running time for computing A · B using the (m, )-TCU: we multiply each group G i with B by using the aforementioned dense matrix multiplication algorithm [9]. For simplicity, we assume ∆ W = 1. Proof. A constant fraction of the H blocks have more than √ m rows and, by Theorem 1, their density is at least τ /2 after removing the empty columns. We can then pad the remaining blocks to reach √ m rows and density at least τ /2 without asymptotically increasing the running time. We then assume that for all blocks r i ≥ √ m and the density is at least τ /2. Consider a block G i given by the reordering with r i rows and c i non empty columns. By representing the group as a dense r i × c i matrix and by extracting the corresponding submatrix B of size c i × N in O (c i n) time, we can compute the product between the G i and the dense matrix in O r i c i N/m 1/2 + c i N /m time. Since the density of G i is at least τ /2, we get K i /(r i c i ) ≥ τ /2 where K i is the number of nonzeros in G i . It then follows that r i c i ≤ 2K i /τ and that Therefore, the multiplication between A and B can be computed in We observe that, when = O (m) and τ = Θ (1), we can compute A·B in time O KN/m 1/2 , which is a factor m 1/2 faster than the trivial algorithm. EXPERIMENTS One of the goal of this study is to evaluate the quality of our reordering and blocking algorithm. 
Since it is computationally infeasible to compute the optimal blocking for a generic sparse matrix (even of modest size), in Section 4.3 we generate synthetic matrices with a known blocking structure, scramble the order of their rows, and evaluate the ability of 1-SA to retrieve the original blocking. With this method, we study the effectiveness of 1-SA across the landscape of blocked matrices, evaluating it on a wide range of sparsity patterns. Then, in Section 4.4, we study the performance improvements when multiplying blocked matrices with dense ones. To this end, we evaluate the performance of a blocked multiplication kernel built using routines implemented in cuBLAS library, and compare it with the performance of the sparse-specific multiplication routine from cuSPARSE. We also validate our methodology on synthetic RMATs [28] and on real-world sparse matrices from the Network Repository [29] collection. The code is publicly available on the github repository at the following url: https://github.com/LACSFUB/SPARTA. git Dataset description and generation For the experiments of section 4.3 we generated synthetic blocked sparse matrices with varying sparsity pattern. A matrix A(∆, θ, ρ) is generated as follows: first, we divide the matrix in ∆ × ∆ blocks. Then, we randomly select a fraction θ of these blocks that we flag as nonzero. Finally, for each nonzero block, we select a fraction ρ of entries to be nonzero. We note that, following this procedure, both the number of nonzero blocks in a blocked row and the number of nonzero entries in a row are not fixed within the same matrix. The parameters used to generate the matrices are detailed in Table 2. For the experiments of Section 4.4, we used synthetic RMATs with generating parameters (0.57, 0.19, 0.19, 0.05). Their size and average degree are detailed in Table 2. We also evaluated our methods on real-world matrices from the Network Repository collection. Their characteristics are described in Table 3. Experimental Setup We ran all experiments in Section 4.4 on a Nvidia V100 GPU with 32GB of memory. We used CUDA 11.4.0 and the corresponding versions of cuBLAS and cuSPARSE. Synthetic block matrices -Blocking curves We evaluated the efficacy of our blocking algorithm on synthetic block-sparse matrices. A block-sparse matrix A(∆, θ, ρ) is generated by fixing the height and width of blocks ∆ W = ∆ H = ∆, the fraction of nonzero blocks θ and the in-block density ρ, as detailed in 4.1. A reordering experiment on the synthetic block matrix A is performed as follows: first, the rows of the matrix are scrambled. Then, the columns are partitioned uniformly with width ∆. Finally, for a given τ , the blocking algorithm is run, producing the reordered and blocked matrix A τ (the subscript will be dropped from now on). We investigate the blocking curves (see Section 2.2) produced by varying τ in our algorithm. In these curves, we observe the trade-off between the in-block density ρ against the block height ∆ H . Note that, since blocks in A are uneven in size and number of elements, ρ is to be intended as the overall density of nonzero elements in the nonzero area of A , and ∆ H as the average height of nonzero blocks in A . The reordering curves in Figure 3 show how our blocking algorithm can explore the trade-off between ∆ H and ρ . Each point corresponds to a different value of τ , and we evaluate the density ρ at ∆ H ≈ ∆ to gauge the ability of our algorithm to recover the original blocking. 
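The generation procedure for the synthetic matrices A(∆, θ, ρ) described above can be sketched as follows; the matrix size and parameter values are illustrative:

```python
import numpy as np

def generate_block_sparse(n, delta, theta, rho, seed=0):
    """Build an n x n matrix divided into delta x delta blocks: a fraction
    theta of the blocks is flagged nonzero, and inside each nonzero block a
    fraction rho of the entries is filled with random values."""
    rng = np.random.default_rng(seed)
    a = np.zeros((n, n))
    nb = n // delta
    nonzero_blocks = rng.random((nb, nb)) < theta
    for bi, bj in zip(*np.nonzero(nonzero_blocks)):
        block_mask = rng.random((delta, delta)) < rho
        block = rng.random((delta, delta)) * block_mask
        a[bi * delta:(bi + 1) * delta, bj * delta:(bj + 1) * delta] = block
    return a

A = generate_block_sparse(n=1024, delta=64, theta=0.1, rho=0.3)
# Scramble the rows before running the blocking algorithm, as in the experiments.
scrambled = A[np.random.default_rng(1).permutation(A.shape[0]), :]
```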
In the rest of the curve, we observe how using a lower (or higher) τ trades in-block density for block height (or vice versa).
Figure 3. Blocking curves: block size (x-axis) and in-block density (y-axis). The 8192 × 8192 matrices are divided into blocks of size 64 × 64, with 10% randomly chosen nonzero blocks. The matrices differ by in-block density (see the legend). Notice how 1-SA can recover the original blocking (red cross) for the three denser matrices, and obtains around half of the optimal density for the sparser ones.
Synthetic block matrices - landscape
To better gauge the overall efficacy of our blocking algorithm, we tested it extensively over the landscape of synthetic blocked sparse matrices. That is, we generated blocked sparse matrices for a range of θ, ρ and ∆ values, scrambled the order of their rows, and then blocked them with τ = [0.1, 0.2, ..., 1]. We then selected a blocking with ∆′_H ≈ ∆ and recorded the relative in-block density ρ′/ρ. Similarly, we recorded the block height ∆′_H at ρ′ ≈ ρ. These values provide a measure of how well the algorithm recovered the original blocking. A perfect recovery occurs when ρ′ = ρ and ∆′_H = ∆. The main findings of these experiments are reported in Figure 4, showing how even unstructured and sparse matrices (high θ, low ρ) can be blocked effectively by 1-SA. Figure 5 shows the effectiveness of blocking using the naive implementation of SA (that is, without the projected rows, the hierarchical merging, and the merge limit). As can be readily observed in the images, SA is unable to capture the block structure as effectively as our algorithm. This is because SA cannot recognize that two entries belong to the same block unless they share the same column index.
The multiplication routine
We have implemented a simple, block-based, sparse-by-dense multiplication routine for matrices stored in the VBR format by resorting to the cuBLAS library. The routine uses cuBLAS streams to process rows of blocks in parallel and uses the cuBLAS gemm routine to multiply the blocks using tensor cores. For the sake of brevity, we will refer to this block-based multiplication as VBR-cuBLAS. We note that our method is not restricted to the cuBLAS library, and can be readily adapted to any high-level parallel multiplication routine for dense matrices. As shown in the following experiments, this routine, applied to the blocking generated by 1-SA, achieves considerable speed-ups against the cuSPARSE baseline.
1-SA vs cuSPARSE
In this experiment, we determine for which multiplication instances it is convenient to employ a blocking-based multiplication instead of a sparse-based one. We benchmarked the block-based cuBLAS multiplication routine on matrices that were blocked with 1-SA, and compared its performance with that of the cuSPARSE spmm routine. In a fashion similar to the experiments of Section 4.3, and as detailed in Section 4.1, we generated a landscape of blocked matrices varying their θ, ρ and ∆ values. We scrambled the order of their rows, and then reordered them using 1-SA. After that, we took a reordering with ∆′_H ≈ ∆ as a representative for that landscape point, and we multiplied it with a dense matrix using both VBR-cuBLAS and cuSPARSE. The results, summarized in Figure 6, confirm that there is a wide class of matrices for which reordering and blocking consistently speed up the multiplication compared to the sparse-based baseline. We note that we are able to obtain a speed-up for matrices with an overall density as small as 1/10,000.
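Independently of cuBLAS, the structure of the block-based multiplication can be illustrated with the following NumPy sketch: each stored dense block is multiplied by the matching row slice of the dense operand and accumulated into the output, which is the same per-block GEMM pattern that VBR-cuBLAS offloads to the tensor cores. The simple (row offset, column indices, block) layout used here is a stand-in chosen only to keep the example self-contained, not the actual VBR format.

```python
import numpy as np

def blocked_spmm(blocks, n_rows, B):
    """Multiply a block-sparse matrix by a dense matrix, block by block.

    blocks: list of (row_start, col_indices, dense_block), where dense_block has
            shape (block_height, len(col_indices)); empty columns are removed,
            mirroring the groups produced by the reordering.
    n_rows: number of rows of the sparse matrix.
    B:      dense right-hand operand of shape (n_cols, s).
    """
    C = np.zeros((n_rows, B.shape[1]), dtype=B.dtype)
    for row_start, cols, block in blocks:
        # one dense GEMM per block: (r_i x c_i) @ (c_i x s)
        C[row_start:row_start + block.shape[0]] += block @ B[cols, :]
    return C

# toy check against a dense reference
dense = np.zeros((4, 6))
dense[0:2, [1, 4]] = [[1.0, 2.0], [3.0, 4.0]]
dense[2:4, [0, 5]] = [[5.0, 6.0], [7.0, 8.0]]
blocks = [(0, [1, 4], dense[0:2][:, [1, 4]]),
          (2, [0, 5], dense[2:4][:, [0, 5]])]
B = np.random.default_rng(0).random((6, 3))
assert np.allclose(blocked_spmm(blocks, 4, B), dense @ B)
```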
We observe that the performance improvement grows with the size of the dense matrix being multiplied. This behaviour is explained by noting that, for wider matrices, the computational intensity of the problem increases. This makes the reduced indexing and better data locality of our multiplication routine more effective in reducing the computation time.
Blocking and multiplication of RMATs
To better assess the ability of our methodology to speed up multiplication, we generated RMATs as detailed in Section 4.1. After blocking them with 1-SA, we compared the performance of VBR-cuBLAS and cuSPARSE when multiplying with a dense matrix. In Figure 7 it can be seen that, for all considered column partition sizes ∆_W, our methodology is effective in reducing the multiplication time. The performance gain grows with the matrix density, and we observe a 4x speed-up for the densest RMAT. We note that, while using bigger ∆_W improves performance, the returns quickly diminish. This is to be expected, since bigger blocks entail higher fill-in.
Blocking and multiplication of real-world matrices
Finally, we considered the multiplication of real-world sparse matrices from the Network Repository [29] database. Details of the employed graphs are provided in Section 4.1. The matrices were blocked with 1-SA and then multiplied with a dense matrix (N = 4096) using both VBR-cuBLAS and cuSPARSE. As for the RMATs of the previous section, Figure 8 shows that, for all considered column partition sizes ∆_W, our methodology is effective in reducing the multiplication time.
RELATED WORKS
The idea of reordering sparse matrices to find their substructures or reorganize their data has been extensively explored, and several algorithms exist to do so with different purposes and efficacy [17], [27], [19], [30], [31], [32]. One of the first, and most common, uses of reordering algorithms is as preconditioners for iterative solvers [33]. Typical reordering algorithms in this class are Reverse Cuthill-McKee (RCM) [34] and Approximate Minimum Degree (AMD) [30]. One consequence of this origin is that most reordering methods work on structurally symmetric matrices and produce symmetric reorderings, since the latter will not change the diagonal entries and the eigenvalues of a matrix. Apart from preventing the use of these algorithms on asymmetric or rectangular sparse matrices, such as those found in pruned neural networks, these assumptions also exclude asymmetric reorderings, which could provide better blocking. While many algorithms were developed with the purpose of blocking, such as the popular PABLO [18] and its variants, they only consider diagonal blocking. Ashcraft's [27] and Saad's [19] compression methods, the starting point of our research, are an exception in this regard in that they also promote the creation of off-diagonal blocks. Another important class of reordering and blocking algorithms comes from graph partitioning problems. In this regard, popular algorithms are nested dissection algorithms [35], multilevel algorithms [36], greedy methods [37], spectral partitioning, and metaheuristics such as simulated annealing or genetic algorithms. Partitioning algorithms can also be extended to treat asymmetric and rectangular matrices by considering bipartite graphs or hypergraphs instead [38].
While these algorithms are more focused on pure blocking than the ones coming from preconditioning, they focus on minimizing the cut produced by the partition, that is, the total weight of edges connecting partitions. This means that they will produce too an approximately optimal diagonal blocking, but not necessarily a good blocking for block-based multiplication in general. To the best of our knowledge, the work closer to our own is described by Zachariadis et al. [16]. They considered the problem of accelerating sparse-sparse matrix multiplication on Nvidia Tensor Cores. In their work, they divided (without reordering) the matrices in tiles, and multiply only nonzero tiles by using auxiliary data structures on the fly, with the consequence of increasing load-and-store operations. Under such a light, their techniques can also be integrated with 1-SA to further increase the density inside of hyper-sparse blocks. In general, their approach could be integrated with ours to treat efficiently even sparser matrices than the one we considered in this paper. Other flavours of blocking and reordering have also been investigated to accelerate parallel spMM. Pichel et al. [20] reordered several matrices with different reordering algorithms and tested their GPU spMV performance in the CSR, HYB, ELLPACK and BELLPACK formats. They found that most of the time, a reordering would exist that considerably improve performance. They provided some insights on which reordering works best for which format, stating for example that BELLPACK benefits from "distancebased" reorderings and is harmed by bandwidth-reduction algorithms. Yet, in that work, they neither provided a way to find a good reordering for a given matrix nor explored the trade-off between block size and block sparsity. In previous work, Pichel et al. developed a framework to reorder a sparse matrix in order to cluster entries in the same row [32]; their approach uses an approximated algorithm for the Travelling Salesman Problem (TSP) to maximize locality, i.e., the number of nonzero entries appearing in consecutive locations. In this regard, their approach is similar to Saad's. Pinar et al. [39] also considered a problem close to our own. They defined a storage format based on 1D horizontal blocks (BCRS) to perform spMV on a CPU. Then, they set themselves to reorder the columns of a matrix to maximize such blocking. Having reduced this problem to a TSP, they used TSP heuristics to solve it. Interestingly, they employed very small blocks (1x2 or 1x3) and reported no benefit from using larger blocks. Hong et al. observed [40] the benefit of identifying clustered entries for spMM on the GPU, devising a hybrid data structure (CSR + row-blocked CSR) that treats sepa- rately blocked and non-blocked entries. In a later work [23], they transposed this approach directly to the multiplication phase, avoiding the creation of a non-standard storage format and relying instead on the classic CSR. To improve blocking, they used an ordering strategy (but not a renumbering) that, within a tile, clusters together columns that exceed a certain density. However, their tiles are only used to organize the multiplication, and they still use the same kernel to multiply both sparse and dense tiles. CONCLUSION We presented a 1-dimensional blocking algorithm able to decompose a sparse matrix in dense blocks, with tunable block size and guaranteed block density. 
The resulting dense blocks can be easily processed by tensor accelerators and used to speed up sparse-by-dense matrix multiplication on these architectures. We focused on the Nvidia CUDA hardware/software stack and showed evidence that our approach is several times faster than the sparse-specific cuSPARSE routine. We evaluated our methodology in the popular CUDA framework, but we note that it does not require specific libraries or architectures to be implemented. We suggest that this methodology can be employed in all situations where a highly efficient, block-based matrix multiplication routine or architecture exists, to bypass the need for low-level sparse-specific solutions. As a final remark, in this work we did not study the cost of blocking, operating under the assumption that it is amortized over many multiplications. However, we report that our current naive serial implementation of 1-SA takes roughly the same time to reorder the matrix as cuBLAS and cuSPARSE take to multiply it. In future work, we expect that optimizing and possibly parallelizing 1-SA will make run-time blocking convenient even for a single multiplication instance.
How Vulnerable Are Automatic Fake News Detection Methods to Adversarial Attacks? As the spread of false information on the internet has increased dramatically in recent years, more and more attention is being paid to automated fake news detection. Some fake news detection methods are already quite successful. Nevertheless, there are still many vulnerabilities in the detection algorithms. The reason for this is that fake news publishers can structure and formulate their texts in such a way that a detection algorithm does not expose this text as fake news. This paper shows that it is possible to automatically attack state-of-the-art models that have been trained to detect Fake News, making these vulnerable. For this purpose, corresponding models were first trained based on a dataset. Then, using Text-Attack, an attempt was made to manipulate the trained models in such a way that previously correctly identified fake news was classified as true news. The results show that it is possible to automatically bypass Fake News detection mechanisms, leading to implications concerning existing policy initiatives. Introduction The spreading of disinformation throughout the web has become a serious problem for a democratic society. The dissemination of fake news has become a profitable business and a common practice among politicians and content producers. A recent study entitled "Regulating disinformation with artificial intelligence" (Marsden and Meyer, 2019), examines the trade-offs involved in using automated technology to limit the spread of disinformation online. Based on this study, this paper discusses the social and technical problems of Automatic Content Moderation (ACM) poses to freedom of expression. Although AI and Natural Language Generation have evolved tremendously in the last decade, there are still concerns regarding the potential implications of automatically using AI to moderate content. One problem is that automatic moderation of content on social networks will accelerate a race in which AI will be created to counter-attack AI. Adversarial machine learning is an technique that attempts to fool models by exploiting vulnerabilities and compromising the results. For example, by changing particular words -e.g., from "Barack" (Obama) to "b4r4ck" -it is possible to mislead classifiers and overpass automatic detection filters. Recent works show the state-ofthe-art machine learning models are vulnerable to these attacks. This study relies on state-of-the-art techniques to attack and dive deep into the fake news detection vulnerabilities. The goal is to experiment with adversarial attacks to discover and compute the vulnerabilities of fake news classifiers. Therefore, this work aims to answer the following research question: How vulnerable is fake news detection to adversarial attacks? The remainder of this paper is organized as follows. Section 2 discusses the background and previous related works. Section 4 describes the design of our experiments, and Section 5 presents the results and discussion. Section 6 summarizes our conclusions and presents future research directions. 2 Related Work 2.1 Fake News Detection Fake news are used to manipulate general opinions of readers about a certain topic (Zhou and Zafarani, 2019). Unlike typical "clickbait" articles, which use misleading and eye-catching headlines, fake news are usually quite long and wordy, consisting of inaccurate or invented plots (Chakraborty et al., 2016). 
This gives rise to the assumption of a wellresearched and factually correct article. The reader, thus, does not notice how their personal opinion about a certain topic is deliberately manipulated. Fake news detection refers to any kind of identification of such fake news. Due to the speed at which digital news is produced today, effective, automated fake news detection requires the use of machine learning tools. Previous research has mainly focused on fake news in social media and fake news in online news articles (Ghanem et al., 2021). There are various models of the machine detection of fake news, which are based on different heuristics. For example, Ghanem et al. (Ghanem et al., 2021) discuss the effectiveness of the "FakeFlow" model, which incorporates both word embedding and affective information such as emotions, moods or hyperbolic words, based on four different datasets. The model receives several small text segments as input instead of an entire article. The result of the study was that this model is more effective than most state-of-the-art models. It generated similar results with less resources. Another study (Woloszyn et al., 2021) investigates the possibility of using artificial intelligence for the automatic generation of Claim-Review. Claim-Review is the web markup introduced in 2015 that allows search engines to access factchecked articles. The basic idea of the so-called "fact check" is that journalists and fact checkers identify misinformation and prevent it from spreading. Accordingly, it is important that fact-checked articles are highlighted and shared by users. Furthermore, research is currently looking at when and why a news article is identified as fake news and when it is not (Shu et al., 2019). This research on "explainable fake news detection" aims to improve the detection performance of the algorithms. For this purpose, both news content and user comments are used as data input. Adversarial Attack Adversarial attacks are part of adversarial machine learning, which has become increasingly important in the field of applied artificial intelligence in recent years. In an adversarial attack, the input data of a neural network is intentionally manipulated to test how resistant it is to deliver the same outputs. These manipulated input data is called "adversarial examples". (Zhixuan Zhou and Hsu, 2021) Such a neural network is described as a "fake news detector" in the context of this paper. The reason for research in the field of adversarial attacks is that more and more attempts are being made to outsmart fake news detectors on the internet. This can be done, for example, by changing the spelling of a word so that the word remains easily interpretable for humans but not for an algorithm (from "Barack" to "B4r4ck"). For this reason, the input data is always manipulated in such a way that a human hardly notices any differences from the non-manipulated input data. In the area of object recognition, for example, only individual pixels are slightly changed, which the human eye can hardly perceive. Examples of adversarial examples in the field of fake news detection are: • Fact distortion: Here, some words are changed or exaggerated. It can be about people, time or places. • Subject-object-exchange: By exchanging subject and object in a sentence, the reader is confused as to who is the executor and who is the recipient. • Cause confounding: Here, either false causal connections are made between events or certain passages of a text are omitted. 
Up to now, vulnerabilities to adversarial attacks have been identified in all application areas of neural networks. Especially in the context of increasingly safety-critical tasks for neural networks (e.g., autonomous driving), methods for detecting false or distorted input data are becoming more and more relevant. (Morris et al., 2020). Regulating disinformation with artificial intelligence According to the study "Regulating disinformation with artificial intelligence" (Marsden and Meyer, 2019), disinformation is defined as 'false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit. This definition is based on the definition of "Final report of the High Level Expert Group on Fake News and Online Disinformation" (Comission, 2018), which additionally specifies that the term "disinformation" does not include fundamentally illegal content such as hate speech or incitement to violence. Nor does it include misinformation that is clearly not misleading, such as satire or parody. As a further delimitation, the paper defines the term "misinformation", which refers to any misinformation that is unintentionally or accidentally false or inaccurate. Marsden and Meyer explain the causes of disinformation in the online context and the responses that have been formulated from a technological perspective. Furthermore they analyze the impact of AI-disinformation initiatives on freedom of expression, media pluralism and democracy. The issue of disinformation is a very long-term historical problem with human society. It has just got a global effect through automation, through the internet and new technologies. Even in the past, people faced the challenges of filtering out false information or illegal content from newly emerging media such as newspapers, radio or television. So, on the one hand there is currently a desire within the European Commission to take action against illegal and unwanted content through online intermediaries. On the other hand, the intervention and regulation of content on the internet is also seen critically, because this is against the basic concept of the internet with its freedom of expression. However, to stop the global spread of disinformation, a restriction of freedom of expression is necessary. Accordingly, for measures to filter fake news, there are three principles that must exist when restricting freedom of expression (Marsden and Meyer, 2019): • Measures must be established by law • Measures must be legitimate and shown to be necessary • Measures must be the least restrictive method of pursuing the goal There are two different ways to detect and remove disinformation: The human and the technical moderation by AI. Over time, AI solutions have become increasingly effective in detecting and removing illegal or unwanted content, but they also raise the question of who is the judge in deciding what is legal or illegal and what is wanted or unwanted in society. The problem here is that neither law nor technology can be truly neutral. Both reflect the values and priorities of those who designed them. This principal is called "garbage in -garbage out". According to expert opinions, AI can help to make an initial filtering, especially for texts and articles, which is then checked by humans. If AI is wrong, there is always the possibility to reverse the decision. To do this, companies also hire cheap subcontractors in remote countries to remove content. 
Policy initiatives in the past have focused on making internet intermediaries more responsible for reducing disinformation on their platforms. While actual content creators were responsible for their content in the past, now the platforms have to take more and more responsibility for the content of the individual actors. The reactions of intermediary sites such as Facebook and YouTube are technological initiatives that identify certain content and remove it in different ways or do not publish it at all. The three most popular initiatives are filtering, blocking, deprioritisation: Filtering is the most effective method. There are 2 different types of filtering: ex-ante and expost. This means filtering before a content goes online and filtering after it has already been published. With the exception of obviously illegal content (e.g. child abuse images or terrorist content) or disruptive content (e.g. spam, viruses), the ex post-removal is preferable. The reason for this is that there is better legal justification for this. One example of filtering is YouTube Content ID. With YouTube Content ID, uploaded files are matched against databases of works provided by copyright holders. If a match is found, copyright owners can decide whether to block, monetise or track a video containing their work. Blocking is probably the most widespread method. It is used by users, email providers, search engines, social media platforms and network and internet access providers. Similar to filtering, blocking can take place ex-ante or ex-post, i.e. after knowledge, request or order. Blocking means that the content is not completely removed but blocks certain users from accessing the content. For this reason, it provides one significant advantage: Content can be blocked depending on the provider's terms of use or the laws of a particular region. For example, content may be blocked in certain places but may be available in others, e.g. if some countries allow certain content by law while others do not. The last option is deprioritisation. In the context of disinformation, deprioritisation means that content is de-emphasized in users' feeds. This takes place when correct content from certain providers is displayed side by side with incorrect content. Then the wrong information is identified and deprioritised so that it is displayed further down and not so prominently anymore. Data Set The data used in this paper was collected by a project called Untrue.News: A search engine designed to find fake stories on the internet (Woloszyn et al., 2020). It uses an open-source web crawler for searching fake news. This web crawler is connected to a natural language processing pipeline that uses automatic and semi-automatic strategies for data enrichment. These can be found in (Tchechmedjiev et al., 2019) and are less than those found in the schema.org/ClaimReview markup. For performing the text attacks, we only used the TRUE and FALSE statements. In Table 2 classification results can be found for the models trained with all four categories. Experiment Design To understand the vulnerability of models trained to detect fake news we split our experiment into two steps. First training state-of-the-art machine learning models using the dataset described in Section 3 and second applying adverserial attacks on the dataset using TextAttack to manipulate the trained models to classify fake news as TRUE news. Fake News Detection as a Classification Problem Two types of classifiers were used. 
Bin, a binary classifier, in which the positive class represents true news and the negative class represents news that is not true, and Mult, a multiclass classifier, where each document is classified as either true news, fake news, untrue news, or other.
Pre-trained Models
In this subsection, we present the applied pre-trained models and BERT (Devlin et al., 2019), on which two of the models are based. In this context, a token describes a single word of a given text and a segment describes a sequence of tokens. The trained models will later be attacked by the TextAttack recipes described in Section 4.4.
BERT
The BERT (Bidirectional Encoder Representations from Transformers) model was originally trained on two datasets: BookCorpus (Zhu et al., 2015) and English Wikipedia. BERT analyzes text by taking concatenations of two segments. The total length of the concatenation is bounded by a parameter T, and the concatenation is processed as one input with special tokens marking the beginning and end of the sentence, as well as the separation point of the two segments. During pre-training, BERT applies a Masked Language Model (MLM) and Next Sentence Prediction (NSP). For MLM, BERT selects 12% of the input tokens and replaces them with a [MASK] token, and another 1.5% with a random vocabulary token. It then proceeds to predict the selected [MASK] tokens. With NSP, BERT makes a binary prediction on whether two segments in a text are adjacent.
RoBERTa
The pre-trained RoBERTa (Robustly optimized BERT approach) is an optimized version of the original BERT model. The authors use a total concatenation length of T = 512 (longer than previously used) and expand the used datasets to BookCorpus, CC-News (Nagel, 2016), OpenWebText (Gokaslan and Cohen, 2019), and Stories (Trinh and Le, 2019). Furthermore, RoBERTa trains with a bigger batch size, removes NSP, and changes the masking for MLM each time a new input is given to the model, thereby avoiding constant masking across different training epochs.
BERTweet (Nguyen et al., 2020) is a pre-trained model that uses elements from both BERT and RoBERTa. For instance, its architecture is taken from BERT, while the pre-training procedure is "copied" from RoBERTa. The main difference is the dataset used: the only training dataset consists of 850 million English tweets, amounting to 80GB of data.
Flair Embeddings
The main difference of Flair Embeddings (Akbik et al., 2018) from the other models is how they capture words: each token is considered a sequence of characters. Furthermore, Flair Embeddings, also called contextual string embeddings, take contextual words into consideration, i.e., the words that appear before and after a given token. A word is therefore embedded depending on the sequence of words around it. For our training we use the embeddings news-forward and news-backward, which were both trained on a 1-billion-word corpus.
Parameterization
The training of each model runs for E = 15 epochs, since with the small amount of training samples all models start converging after approximately 7-8 epochs. The mini-batch size is set to b = 32. For BERTweet and RoBERTa we set the learning rate to lr = 3e-5, while for Flair Embeddings we set it to lr = 0.1. Finally, for Flair Embeddings we also set the anneal factor (the factor by which the learning rate is annealed) to a_f = 0.5 and the patience (the number of epochs without improvement after which the learning rate is annealed) to p = 5.
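To illustrate the Flair Embeddings setup with the hyperparameters above, a training script could look roughly like the following; the data folder, file names and output path are placeholders, and the exact API (in particular where anneal_factor and patience are passed) may differ between Flair versions.

```python
from flair.data import Corpus
from flair.datasets import ClassificationCorpus
from flair.embeddings import FlairEmbeddings, DocumentRNNEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

# FastText-style classification files ("__label__TRUE <text>" per line);
# folder and file names are placeholders.
corpus: Corpus = ClassificationCorpus("data/untrue_news",
                                      train_file="train.txt",
                                      dev_file="dev.txt",
                                      test_file="test.txt")

# contextual string embeddings pooled into a document embedding
embeddings = DocumentRNNEmbeddings([FlairEmbeddings("news-forward"),
                                    FlairEmbeddings("news-backward")])

classifier = TextClassifier(embeddings,
                            label_dictionary=corpus.make_label_dictionary())

trainer = ModelTrainer(classifier, corpus)
trainer.train("models/flair_fakenews",    # output folder (placeholder)
              learning_rate=0.1,
              mini_batch_size=32,
              max_epochs=15,
              anneal_factor=0.5,
              patience=5)
```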
Precision, recall and the F1-score are computed in the usual way, i.e., precision = tp/(tp + fp), recall = tp/(tp + fn), and F1 = 2 · precision · recall/(precision + recall), where tp is the number of positive instances correctly classified as positive, tn the number of negative instances correctly classified as negative, fp the number of negative instances wrongly classified as positive, and fn the number of positive instances wrongly classified as negative. We defined positive instances as fake news websites and negative instances as reliable news websites.
Text Attack Recipes
The following attack recipes from TextAttack (Morris et al., 2020) were applied on the dataset (described in Section 3) to manipulate the three introduced models into classifying FALSE statements as TRUE, thereby misinterpreting fake news as true news.
• DeepWordBug: Generates small text perturbations in a black-box setting. It uses different types of character swaps (swapping, substituting, deleting and inserting) with greedy replace-1 scoring (Gao et al., 2018).
• TextBugger: These attacks were optimized to perform with real-world applications. They use space insertions, character deletion and swapping. Additionally, they substitute characters with similar-looking letters (e.g., o with 0) and replace words with their top nearest neighbor in a context-aware word vector space (Li et al., 2019).
• PSOZang: Word-level attack using a sememe-based word substitution strategy as well as particle swarm optimization (Zang et al., 2020).
• PWWSRen2019: These attacks focus on maintaining lexical correctness, grammatical correctness as well as semantic similarity by using synonym swaps. Words to swap are prioritized by a combination of their saliency score and the maximum word-swap effectiveness (Ren et al., 2019).
• TextFoolerJin2019: Word swap with the 50 closest embedding nearest neighbors. Optimized on BERT.
• BAEGarg2019: Uses a BERT masked language model transformation. It uses the language model for token replacement to best fit the overall context (Garg and Ramakrishnan, 2020).
• CheckList2020: Inspired by the principles of behavioral testing. Uses changes in names, numbers and locations as well as contraction and extension (Ribeiro et al., 2020).
• InputReductionFeng: This attack concentrates on the least important words in a sentence. It iteratively removes the word with the lowest importance value until the model changes its prediction. The importance is measured by looking at the change in confidence of the original prediction when removing the word from the original sentence (Feng et al., 2018).
With the exception of InputReductionFeng, each recipe has three possible results for the attack on each sentence. Success means the text attack resulted in a wrong classification. Skipped means that the model classified the sentence wrongly to begin with, and therefore the sentence does not need to be manipulated. Fail means the model still classified the sentence correctly. InputReductionFeng uses Maximized to indicate that the model uncertainty was maximized: a rubbish example is classified as correct with higher confidence than the original valid input. Skipped is used when the model classified the sentence wrongly to begin with, and therefore the sentence does not need to be manipulated.
Results and Discussion
The results discussed in this section can also be found in our GitHub repository (Schneider, 2021), including the respective implementations.
Training the Models
The first step of our experiment is to train state-of-the-art machine learning models to detect fake news. Since our dataset includes classes beyond TRUE and FALSE, we trained RoBERTa, BERTweet and Flair Embeddings both as binary and multiclass classifiers.
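Before turning to the results, the following minimal sketch shows how such a recipe can be applied to a fine-tuned HuggingFace-style classifier with TextAttack. The model path, label convention and sample sentence are placeholders, the recipe class names follow TextAttack's own naming (e.g., DeepWordBugGao2018 for DeepWordBug), and API details may vary across TextAttack versions.

```python
import transformers
from textattack.models.wrappers import HuggingFaceModelWrapper
from textattack.attack_recipes import TextFoolerJin2019, DeepWordBugGao2018
from textattack.datasets import Dataset
from textattack import Attacker, AttackArgs

# placeholder fine-tuned binary fake-news classifier (0 = FALSE, 1 = TRUE)
model_name = "path/to/finetuned-roberta"            # placeholder path
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# FALSE statements sampled from the dataset (placeholder example)
samples = Dataset([("Example false statement to be attacked.", 0)])

for recipe in (TextFoolerJin2019, DeepWordBugGao2018):
    attack = recipe.build(wrapper)                  # word- and character-level recipes
    attacker = Attacker(attack, samples, AttackArgs(num_examples=-1))
    results = attacker.attack_dataset()             # Successful / Failed / Skipped per sample
```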
Binary Classification
For the binary classification, RoBERTa, BERTweet and FlairEmbeddings were trained only on TRUE and FALSE statements. Their precision, recall and F1-scores, depicted in Table 1, demonstrate that the BERT-based models performed best, with scores of > 80%. Overall, BERTweet had the best results; however, the difference to RoBERTa is minimal even though the models were pre-trained on entirely different datasets. With an F1-score of 70%, FlairEmbeddings received the worst score, which could be traced back to the small size of the dataset.
Multiclass Classification
We also trained all three models on the four existing classes. Due to the higher classification complexity, all validation scores (shown in Table 2, which reports the precision, recall and F1-score) were about 10-20% worse than for the binary classification. It is worth mentioning that RoBERTa performed slightly better than BERTweet, contrary to the less complex classification in Section 5.1.1. FlairEmbeddings' scores were affected the most and dropped to 50%.
Adversarial Attacks
The next step is to apply adversarial attacks on the dataset using TextAttack. Approximately 40 FALSE statements were sampled from the dataset and attacked using the TextAttack recipes. Due to the poor multiclass classification performance, we applied all attacks on the binary-trained models. The results can be found in Table 3 for BERTweet, in Table 4 for RoBERTa and in Table 5 for Flair. The tables contain the percentage of sentences predicted as Successful, Failed or Skipped. Table 6 shows the percentages obtained for InputReductionFeng for all three models and the mean value of the score improvement over all sentences for each model. To see the distribution of word-level and character-level attacks, Table 7 was generated. It contains the mean value of the Success percentages that can be found in Tables 3, 4 and 5. In comparison to the BERT-based models, Flair classified a lot more inputs wrongly, which is reflected in the higher percentage of skipped statements in Table 5. BERTweet seems to be the most vulnerable to InputReductionFeng attacks, but the difference in score increase is not very high (±0.01). Overall, these attacks show similar results across all models.
Recipe Success Level
Table 8 shows that the four top-ranking attacks are word-level attacks. It seems that the models are more vulnerable to word-level attacks than to character-level or mixed attacks (character & word). Finally, the Success Rate for every model was calculated using Formula 1, i.e., as the share of successful attacks among all non-skipped statements. The skipped statements were not taken into the calculation, as they depend on the model training and not on TextAttack. The skipped values show which statements were not predicted correctly by the model in the first place; thus, they depend on the model and reflect its accuracy. For the success rates we wanted to see TextAttack's efficiency on statements that the model would correctly classify as fake news. Our results for the Success Rates underline the results seen in Tables 3, 4 and 5, showing that Flair, with S_r = 54.77%, is less vulnerable to the attacks than the other two models with S_r ≈ 70%. In conclusion, this gives us a Total Success Rate of 65.15% for adversarial attacks on fake news detection using TextAttack.
Conclusion
This paper aimed to answer the question: how vulnerable is automatic fake news detection to adversarial attacks?
We tested this by checking whether automated augmentation of fake news sentences (FALSE statements) would lead to TRUE classifications, which would allow them to bypass the fake news detection mechanisms. Our results show that using the Python library TextAttack allows automated changing of the classification for 65.15% of the sentences. Flair, the only model using word-level embeddings (contextual string embeddings), seems less vulnerable to attacks, with a Success Rate of 54.77%. The other two models, which use document embeddings, show 72.45% (BERTweet) and 68.22% (RoBERTa). Furthermore, word-level swaps seem to be more successful, with an average of 76.87%, compared to character-level or mixed swaps with an average of 55.21%. Consequently, the models are more vulnerable to attacks using semantically correct sentences with changed meaning than to attacks using typos. Overall, it seems that it is possible to bypass the classifier with these attacks. However, our results do not consider that a human will be able to see obvious spelling mistakes in sentences. Furthermore, a human will have a higher accuracy in recognizing unfitting words in sentences. Looking at the augmented sentences, we think that many of them will be recognized as FALSE (TrOmp hsut down American airportA on 4 Jul 201B or Hollywood Action Star Christelle Chan Dead). As a conclusion of these results, we think that the policy initiatives of blocking and deprioritisation should be avoided if possible, as these methods do not completely remove the fake news statements from users' feeds. This makes it easier to target these statements with automated attacks, as shown in our research. A scenario would be to attack these sentences until they are unblocked or re-prioritised in a user's feed. This would lead to a dangerous spreading of fake news in social networks.
Intra-Individual Variability of Human Dental Pulp Stem Cell Features Isolated from the Same Donor
It is primarily important to define the standard features and factors that affect dental pulp stem cells (DPSCs) for their broader use in tissue engineering. This study aimed to verify whether DPSCs isolated from various teeth extracted from the same donor exhibit intra-individual variability and what the consequences are for their differentiation potential. The heterogeneity determination was based on studying the proliferative capacity, viability, expression of phenotypic markers, and relative telomere length. The study included 14 teeth (6 molars and 8 premolars) from six different individuals aged 12 to 16. We did not observe any significant intra-individual variability in DPSC size, proliferation rate, viability, or relative telomere length change within lineages isolated from different teeth but the same donor. The minor non-significant variances in phenotype were probably mainly because DPSC cell lines comprised heterogeneous groups of undifferentiated cells independent of the donor. The other variances were seen in DPSC lineages isolated from the same donor when the teeth were at different stages of root development. We also did not observe any changes in the ability of the cells to differentiate into mature cell lines: chondrocytes, osteocytes, and adipocytes. This study is the first to analyze the heterogeneity of DPSCs in relation to the donor.
One DPSC source is the dental pulp of permanent teeth. The dental pulp is the soft connective tissue of the tooth, containing four layers. The external layer (odontoblast layer) is made up of odontoblasts producing dentin; the second layer (cell-free zone) is poor in cells and rich in collagen fibers; the third layer (cell-rich zone) contains fibroblasts and dental pulp stem cells. From this layer, undifferentiated stem cells migrate to various districts where they can differentiate under different stimuli and make new differentiated cells and tissues, such as a "dentin-like tissue" (a reparative or tertiary dentin). The innermost layer (core of the pulp) comprises blood vessels and nerves that enter the tooth mostly through the apical foramen. Other cells in the pulp include fibrocytes, macrophages, and lymphocytes [25]. The dentin and enamel act as barriers, separating the pulp tissues from external differentiation stimuli; pulp environment structures called niches help keep the DPSCs in their primitive state. Another advantage that DPSCs have over other adult mesenchymal stem cells is the ease of harvesting. The most frequently used source of DPSCs is the dental pulp from extracted third molars [26]. Therefore, the isolation of stem cells for future treatment can be viewed as a part of a planned extraction procedure performed under local anesthesia rather than an unnecessary intervention performed only for stem cell isolation. DPSC isolation is easy, fast, not financially demanding, and safe [10]. However, many questions must be addressed concerning the possible broader use of DPSCs in tissue engineering and regenerative medicine. It is primarily important to define the standard DPSC characterizations and the factors that affect them. The Mesenchymal and Tissue Stem Cell Committee of the International Society for Cellular Therapy proposed minimal criteria to define human MSCs [27]. First, such cells must be plastic-adherent when cultured under standard cultivation conditions.
Second, they must express several clusters of differentiation markers, and third, MSCs must differentiate into osteoblasts, adipocytes, and chondroblasts in vitro [27]. Previous research has described various properties of DPSCs in relation to the method of isolation and cultivation [28] and the interindividual, age-dependent variances among donors [29,30]. Some research studies have focused on comparing features of stem cells isolated from various tissue sources, including dental pulp [31][32][33]. None of the published studies have investigated the diversity of DPSCs isolated from different teeth extracted from the same donor. The main question of this study is whether DPSC phenotypes exhibit intra-individual variability (and if so, to what extent) and what role this might play in the differentiation potential. With this in mind, our study seeks to compare the basic phenotypic characteristics of DPSCs (isolated from the same donor and cultured under the same conditions) and to define potential differences concerning their ability to differentiate into osteoblasts, chondroblasts, and adipocytes.
Donor and Tooth Overview
The study included 14 teeth from six different individuals aged 12 to 16 (Table 1). Teeth isolated from donors 1, 2, 3, 5, and 6 were extracted during one appointment on the same day. The premolars obtained from donor 4 were extracted in two sessions. Teeth from donors 1-5 were extracted under local anesthesia, but the molars from donor 6 were extracted under general anesthesia.
Laboratory Procedures
After approximately seven days from the initial seeding, adherent fibroblast-like spindle colonies were observed. Once they reached confluence, passaging of the adherent cells resulted in rapid multiplication. All DPSCs were cultivated up to the 8th passage. Paired lineages of DPSCs isolated from the same donor showed a similar proliferation rate (Figures 1 and 2). A slight non-significant variance was observed in donors 1, 2, and 6. All these extracted teeth were molars. However, lineage 1-A was isolated from tooth 27 and 1-B from tooth 28. The same pattern was seen in donor 6, where lineage 6-A was isolated from tooth 37 and 6-B from tooth 38. In both cases, the wisdom teeth were at a more immature stage of root development than the second molars.
Cumulative population doublings of all paired DPSCs reached from the primary to the 8th passage. We used the formula PD = log2(Nx/N1) for calculating the population doublings reached in each passage.
Nx is the total passage cell count calculated using the Z2-Counter (Beckman Coulter, Miami, FL, USA), and N1 is the initial cell count seeded into the culture dish (5000 cells/cm²). The statistical significance was calculated using a paired t-test; no comparison was statistically significant.
We used the formula PDT = t/n, where t is the number of hours of cultivation per passage and n is the number of PDs in that passage. The statistical significance was calculated using a paired t-test; no comparison was statistically significant.
During each passaging, we also measured the cell diameter. Figure 3 illustrates the average cell size during cell cultivation. Data are presented as a mean, with SD plotted as error bars; the statistical significance was calculated using a paired t-test, and no comparison was statistically significant.
The cell viability assay was carried out using a Vi-Cell analyzer (Beckman Coulter, Miami, FL, USA) in the 2nd and 8th passages. All paired DPSC lineages contained more than 89% viable cells. Lineage 2-B (isolated from tooth 28) had the highest percentage of viable cells in both measurements. However, the cell viability of all paired lineages did not differ statistically (Figure 4).
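For clarity, the population-doubling bookkeeping used above (PD = log2(Nx/N1), PDT = t/n) can be reproduced with a few lines of Python; the seeding area and the cell counts below are made-up values for illustration only.

```python
import math

SEEDED = 5000 * 25          # illustrative: 5000 cells/cm^2 on an assumed 25 cm^2 dish

def population_doublings(n_harvested, n_seeded=SEEDED):
    """PD = log2(Nx / N1) for one passage."""
    return math.log2(n_harvested / n_seeded)

def doubling_time(hours_in_culture, pd):
    """PDT = t / n, with t in hours and n the PDs reached in that passage."""
    return hours_in_culture / pd

# made-up (hours, harvested cell count) pairs for passages 1..3 of one lineage
harvests = [(96, 610_000), (72, 540_000), (72, 515_000)]
cumulative_pd = 0.0
for hours, nx in harvests:
    pd = population_doublings(nx)
    cumulative_pd += pd
    print(f"PD = {pd:.2f}, PDT = {doubling_time(hours, pd):.1f} h, "
          f"cumulative PD = {cumulative_pd:.2f}")
```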
The DPSCs are identified by specific marker gene expressions. The minimal criteria set by the International Society for Cellular Therapy assure MSC identity by the expression of CD105, CD73, and CD90, and the lack of expression of CD45, CD34, CD14 or CD11b, CD79alpha or CD19, and HLA-DR surface molecules [34]. To verify whether there is significant intra-individual variability in phenotype patterns, we performed flow cytometry analysis in the 3rd and 7th passages of all paired lineages of DPSCs. We did not observe a statistical difference in the expression of any of the analyzed CD markers (Figure 5). All paired lineages highly (>70%) expressed the CD markers for mesenchymal stem cells (CD29, CD44, CD73, CD90) and the stromal-associated stem cell markers CD13 and CD166. They showed no (<10%) or low (<40%) expression of CD31 (platelet endothelial cell adhesion molecule, PECAM-1), CD34 (a transmembrane phosphoglycoprotein of hematopoietic stem cells), and CD45 (a protein tyrosine phosphatase of hematopoietic stem cells).
Figure 5. Immunophenotype profile of all paired lineages in the 3rd and 7th passages. Data for the 7th passage are depicted as boxes filled with patterns. The layouts of the graphs illustrate percentages of positive cells, determined as the percentage with a fluorescence intensity greater than 99.5% of the negative isotype immunoglobulin control. After normality had been ascertained using a Shapiro-Wilk test or Kolmogorov-Smirnov test, paired lineages were compared using a paired t-test for continuous variables, or a Wilcoxon matched-pairs test on ranks for nonparametric variables. No variances were shown as statistically significant.
To determine whether there is a difference in the relative telomere length of DPSCs isolated from different teeth but the same donor, we performed a quantitative PCR assay in different passages (the 2nd and the 7th passage). The analysis results are depicted in Figure 6. Although we observed a variance between the lineages of donor 2, there was no statistically significant difference among the paired lineages. Lineage 2-B prolonged its relative telomere length in the 7th passage in comparison with lineage 2-A. However, the overall trend among all lineages was a shortening of the relative telomere length with increasing passage number. This was statistically significant (p = 0.002).
Figure 6. The relative telomere length change measurement between the 2nd and 7th passage (T/S ratio). Data are presented as a mean, and SD plotted as error bars. Data for the 7th passage are depicted as boxes filled with patterns. The error bars were calculated for three technical replicates without biological replicates. The statistical significance was calculated using a paired t-test; no comparison was statistically significant. The trend of relative telomere length shortening in the 7th passage measurement was statistically significant compared to the measurement performed in the 2nd passage (p < 0.05).
Differentiation Assay
We did not prove the hypothesis that there are consequences of intra-individual variability in DPSC phenotypes in terms of their differentiation potential. We were successfully able to trigger osteogenesis, chondrogenesis, and adipogenesis in all paired lineages of DPSCs according to the external cultivation conditions. To verify the success of differentiation, we performed immunocytochemical detection (type II collagen and osteocalcin) and histological staining (blue Masson trichrome, von Kossa staining, and Oil Red-O staining). The results are illustrated in the following figures (Figures 7-11). We chose representative examples of paired lineages (5-A and 5-B).
Discussion
Previously published studies have described the basic properties of DPSCs and how they vary depending on the different methods of isolation or cultivation [28,35,36] or the donor age [29,30]. There are also studies describing various characterizations of dental stem cells isolated from different tissues, including DPSCs [31,32,37]. This study aimed to verify whether DPSCs isolated from various teeth extracted from the same donor exhibit intra-individual variability and what the consequences are for their differentiation potential.
Significant intra-individual heterogeneity among DPSCs isolated from various teeth of the same donor would reveal potential issues that should be taken into consideration when planning the research methodology. Conversely, the absence of intra-individual heterogeneity would simplify laboratory practices. In both cases, the conclusion of our study will have an impact on the standardization of laboratory protocols. Mehrabani et al. published a study showing that DPSCs collected from different teeth showed different properties, especially different proliferation rates [38]. However, it was based on a comparison of dental pulp stem cells from different donors. Although the results of that study are valuable, they are nonetheless limited with respect to the different properties of DPSCs because these were dependent on the type of tooth from which the cells were isolated. Therefore, our study aimed to compare the basic phenotype characteristics of DPSCs isolated from the same donor and under the same cultivation conditions and to define the potential differences in multipotency. First, we wanted to verify whether there is a significant difference in the DPSC phenotypes within paired lineages isolated from the same donor. To answer this question, we compared paired DPSC lineages in terms of proliferation capacity, viability, and the phenotype profile typical for mesenchymal stem cells using flow cytometry, and we determined the relative telomere length change in different passages during cultivation. Second, we studied whether the intra-individual variability in basic DPSC characteristics affected the ability to differentiate into osteocytes, chondrocytes, and adipocytes. The study included 14 teeth from six different individuals aged 12 to 16. The young age of the patients resulted from the fact that the most common reason for extraction was the initiation of orthodontic treatment, during which more than one tooth was extracted. Regarding the characterization of DPSCs, we did not observe any significant changes in the proliferation capacity as determined by population doubling time (PDT) or cumulative population doublings (PD). The biggest differences were seen within the lineages isolated from donors 1 and 6. Lineage 1-A was isolated from tooth 27 and 1-B from tooth 28. The same pattern was seen in donor 6, where lineage 6-A was isolated from tooth 37 and 6-B from tooth 38. In both cases, the wisdom teeth were at a more immature stage of root development than the second molars. We also observed a difference in donor 2, but in this case, both extracted teeth were wisdom teeth. However, lineage 2-B was the only one that did not exhibit relative telomere length shortening with increasing passaging. The other lineages displayed shortened telomeres with increasing passaging. Extensive in vitro proliferation of human DPSCs is associated with telomere attrition [39,40]. Regarding the phenotype profile, all paired lineages highly expressed the mesenchymal stem cell markers and showed no or low positivity for hematopoietic or endothelial markers. However, we found particular differences in the expression of CD markers between cell lineages isolated from the same donor. None of them were statistically significant. We suppose that DPSC cell cultures comprise different types of undifferentiated cells and that this heterogeneity is probably donor-independent. Furthermore, we investigated whether the intra-individual heterogeneity affected the multipotency of the isolated lineages.
We successfully triggered osteogenic, chondrogenic, and adipogenic differentiation in all paired lineages. We verified our findings using immunocytochemistry to reveal osteocalcin and type II collagen. We also stained samples using histological staining (von Kossa staining, blue Masson trichrome, and Oil Red-O) to detect calcium phosphate deposits, collagen and procollagen, and adipose vacuoles and droplets. We are fully aware of the limitations of the study. First, we had only two paired lineages isolated from different types of teeth (donors 1 and 6). For a wider range of results, it will be necessary to study DPSCs isolated from different types of teeth extracted from the same donor. However, it seems thus far that the intra-individual variability between DPSCs isolated from the same donor depends on the stage of tooth root development. The minor variations we did observe suggest that DPSC cell cultures comprise different types of undifferentiated cells and that this heterogeneity is probably independent of the donor. Second, it would be better to quantify the differentiation ability within the lineages isolated from the same donor using PCR-based evaluation of particular differentiation markers. In our study, we only verified their ability to differentiate into three different mature cell types. In future studies, we would like to analyse paired lineages isolated from the same donor but from different tooth types (premolars vs. molars or other combinations). We also would like to quantify the differentiation ability within the lineages isolated from the same donor. Donors All donors and/or their legal representatives were informed of the purpose of our study and gave informed consent before being included in the study. The Ethical Committee of University Hospital Hradec Kralove approved the study guidelines and informed consent (ref. no. 352 201812 SO7P). The inclusion criterion was the extraction of at least two teeth. Exclusion criteria were carious or periodontally compromised teeth. Common reasons for tooth extraction were recurrent inflammatory complications of the soft tissues around semi-impacted third molars, ectopic localization of impacted third molars, or premolar extraction as a part of ongoing orthodontic treatment. All patients were healthy individuals with no history of smoking. DPSC Isolation Immediately after the extraction, each tooth was cleaned using sterile gauze to remove the microbial plaque. Afterward, the teeth were decontaminated using a 0.2% solution of chlorhexidine gluconate for 30 s. After this step, the extracted teeth were placed in tubes with a chilled transportation medium containing 1 mL of Hank's balanced salt solution (HBSS, Invitrogen, Carlsbad, CA, USA), 9 mL of water for injection (Bieffe Medital, Grosotto, Italy), and antibiotics and an antifungal agent against any potential contamination, including 200 µL/10 mL streptomycin (Invitrogen), 200 µL/10 mL gentamicin (Invitrogen), 200 µL/10 mL penicillin (Invitrogen), and 50 µL/10 mL amphotericin (Sigma-Aldrich Co., St. Louis, MO, USA). The temperature was kept at 4 °C during transportation to the tissue laboratory at the Department of Histology and Embryology at the College of Medicine in Hradec Kralove. In the laboratory, samples were processed in the culture room in a laminar flow chamber on the same day as the tooth extractions. First, the pulp chamber of each tooth was exposed by splitting the tooth at the cemento-enamel junction using Luer forceps.
After opening the pulp chamber, we removed the pulp tissues using a sharp probe and tweezers. After the pulp tissues were retrieved, we minced them using sterile scissors, ground them using a mini-tissue grinder with an isotonic solution (phosphate-buffered saline, PBS; Sigma-Aldrich), and finally dissociated them enzymatically using 0.05% Trypsin-EDTA (Gibco, London, UK) for 10 min at 37 °C. After this period, the enzymatic digestion was neutralized using a neutralization medium consisting of 20% Alpha-MEM cultivation medium (Gibco) and 80% fetal bovine serum (FBS; PAA Laboratories, Inc., Dartmouth, MA, USA). After centrifugation (600× g, 5 min), the cell pellet was resuspended in a modified cultivation medium for mesenchymal adult progenitor cells, Minimum Essential Medium Eagle-Alpha modification (Alpha-MEM, Gibco), containing 2% fetal bovine serum (FBS, PAA Laboratories) and supplemented with 10 ng/mL epidermal growth factor (PeproTech, London, UK), 10 ng/mL platelet-derived growth factor (PeproTech), 50 nM dexamethasone (Sigma-Aldrich), 0.2 mM L-ascorbic acid (Bieffe Medital) for protection against oxygen radicals, the amino acid glutamine (Invitrogen) at a final concentration of 2%, and antibiotics: 100 U/mL penicillin and 100 µg/mL streptomycin (Invitrogen), 20 µg/mL gentamicin (Invitrogen), and 0.4 µL/mL amphotericin (Sigma-Aldrich). The culture medium was changed every 3 days; prior to this, the tissue culture dish was washed with PBS to remove non-adherent elements and detritus. We kept the cultivation dishes at 37 °C and 5% CO2. We reviewed the culture dishes regularly and, after the cells reached 70% confluence, we passaged them using 0.05% Trypsin-EDTA and reseeded them at a density of 5,000 cells/cm². All lineages were passaged up to the 8th passage (8p). Laboratory Procedures To find out whether there is significant intra-individual variability, we compared DPSC characteristics in terms of proliferative capacity, viability, expression of phenotypic markers typical for mesenchymal stem cells, and relative telomere length in different passages. DPSC Size, Proliferation, and Viability During each passaging, we analyzed DPSC diameters and the total DPSC count using a Z2-Counter (Beckman Coulter). Measurements were performed according to the manufacturer's instructions. After each passaging and centrifugation, the cell pellet was resuspended in 1 mL of the culture medium, and 100 µL of the cell suspension mixed with 9.9 mL of diluent was used for analysis. The Z2-Counter analyzer is based on the detection and measurement of changes in electrical resistance produced by cells suspended in a conductive liquid (diluent) traversing a small aperture. Proliferation activity was determined in each passage as cumulative population doublings (PDs) and population doubling time (PDT); the formulas for these measurements are described in our previous study [40] and sketched in the example below. The viability assay was performed using the Trypan Blue Dye Exclusion method in the 2nd and 8th passages. Equal volumes of cell suspension and trypan blue were mixed and automatically analyzed using a Vi-Cell analyzer (Beckman Coulter). Flow Cytometry To verify intra-individual variability in the pattern of clusters of differentiation markers, we used immunophenotyping against the following markers: CD10 (CB-CALLA .
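The proliferation measures referred to above, cumulative population doublings (PD) and population doubling time (PDT), are only cited from reference [40]; the following is a minimal computational sketch of the standard formulas, assuming that the seeded and harvested cell counts and the cultivation time are known for each passage. Function and variable names are illustrative and not taken from the paper.

```python
import math

def population_doublings(n_seeded: float, n_harvested: float) -> float:
    """Population doublings (PD) achieved in one passage: log2 of the fold expansion."""
    return math.log2(n_harvested / n_seeded)

def population_doubling_time(n_seeded: float, n_harvested: float, hours_in_culture: float) -> float:
    """Population doubling time (PDT) in hours: culture time divided by the doublings achieved."""
    return hours_in_culture / population_doublings(n_seeded, n_harvested)

def cumulative_population_doublings(per_passage_pd: list[float]) -> float:
    """Cumulative PD over serial passages is the running sum of per-passage doublings."""
    return sum(per_passage_pd)

# Illustrative example: 5,000 cells/cm2 seeded in a 25 cm2 dish and harvested after 96 h.
seeded = 5_000 * 25
harvested = 1_000_000
pd = population_doublings(seeded, harvested)             # ~3.0 doublings
pdt = population_doubling_time(seeded, harvested, 96.0)  # ~32 h per doubling
```

Under these assumptions, a lineage with a longer PDT or a flatter cumulative PD curve proliferates more slowly, which is the basis on which the paired lineages were compared.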
For flow cytometric analysis, cells were detached and stained sequentially with primary immunofluorescence antibodies conjugated with phycoerythrin (PE) or fluorescein (FITC) against the above-mentioned CD markers before analysis on a Cell Lab Quanta analyzer (Beckman Coulter). The percentage of positive cells was determined as the percentage of cells with a fluorescence intensity higher than that of the upper 0.5% of the isotype immunoglobulin control. Classification criteria were as follows: <10% no expression, 10-40% low expression, 40-70% moderate expression, and >70% high expression [41]. Quantitative PCR To determine the relative telomere length, we used a previously described method [39,40]. Briefly, telomere length measurement was performed by a qPCR assay. We extracted the DNA of the isolated stem cells using a DNeasy Tissue Kit (Qiagen, Hilden, Germany). After DNA isolation, we measured the concentration in each sample using a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). The relative telomere length was calculated using the formula T/S = 2^−ΔCt, where ΔCt = Ct telomere − Ct single-copy gene (a computational sketch is given below). The single-copy (housekeeping) gene was the gene coding for the acidic ribosomal phosphoprotein 36B4. We performed the qPCR in 96-well plates, and we analyzed each sample in triplicate at the same well position on an ABI 7500 HT detection system (Applied Biosystems, Foster City, CA, USA). Each 20 µL reaction consisted of 20 ng DNA, 1× SYBR Green master mix (Applied Biosystems), 200 nM forward telomere primer (CGG TTT GTT TGG GTT TGG GTT TGG GTT TGG GTT), and 200 nM reverse telomere primer (GGC TG TCT CCT TCT CCT TCT CCT TCT CCT TCT CCT). We used the following primer pair for the housekeeping gene analysis: 36B4u, CAG CAA GTG GGA AGG TGT AAT CC; 36B4d, CCC ATT CTA TCA TCA ACG GGT ACA A. The DNA quantity standard was verified using one reference sample diluted to final concentrations of 0.02, 0.20, and 2.00 ng/µL. The cycling of each qPCR analysis (for both the telomere and the housekeeping gene) started with a ten-minute hold at 95 °C, followed by cycles of 15 s at 95 °C and one minute at 60 °C. We analyzed the difference in the relative telomere length between the 2nd and 7th passages. Differentiation Assay To verify the hypothesis that the intra-individual variability of DPSC phenotypes affects the ability to differentiate into osteocytes, chondrocytes, and adipocytes, we triggered osteogenesis, chondrogenesis, and adipogenesis in the isolated stem cells. To determine whether the isolated cells were able to produce a cartilaginous extracellular matrix, chondrogenesis was initiated using the Differentiation Basal Medium-Chondrogenic (Lonza, Basel, Switzerland), supplemented with 50 ng/mL TGF-β1 (R&D Systems, Minneapolis, MN, USA). The chondrogenic medium was exchanged twice a week for three weeks. To provide osteogenic conditions, the standard cultivation medium was replaced with the Differentiation Basal Medium-Osteogenic (Lonza) after the cells reached 100% confluence; cells were cultivated in this medium for three weeks, and the medium was changed every third day. We used histological and immunocytochemical processing to reveal signs of successful differentiation. The pellets were fixed using 10% formalin, dehydrated in ascending concentrations of ethanol, embedded in paraffin, and cut into 7 µm thick sections. After deparaffinization, the chondrogenic sections were stained with blue trichrome staining modified according to Masson or processed for anti-type II collagen immunocytochemistry.
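A minimal sketch of the two calculations described above: the relative telomere length T/S = 2^−ΔCt, with ΔCt = Ct(telomere) − Ct(single-copy gene), and the expression categories applied to the CD-marker percentages. Averaging the technical triplicates before forming ΔCt is an assumption about the workflow, and all numeric values in the example are illustrative.

```python
from statistics import mean

def relative_telomere_length(ct_telomere: list[float], ct_single_copy: list[float]) -> float:
    """T/S ratio from qPCR Ct values: T/S = 2 ** -(Ct_telomere - Ct_single_copy_gene).
    Technical replicates (triplicates here) are averaged before the subtraction."""
    delta_ct = mean(ct_telomere) - mean(ct_single_copy)
    return 2 ** (-delta_ct)

def classify_expression(percent_positive: float) -> str:
    """Categories used for CD markers: <10% none, 10-40% low, 40-70% moderate, >70% high."""
    if percent_positive < 10:
        return "no expression"
    if percent_positive < 40:
        return "low expression"
    if percent_positive < 70:
        return "moderate expression"
    return "high expression"

# Illustrative triplicate Ct values for one lineage at two passages.
ts_passage2 = relative_telomere_length([21.5, 21.6, 21.4], [22.3, 22.4, 22.2])  # ~1.7
ts_passage7 = relative_telomere_length([22.4, 22.5, 22.3], [22.3, 22.2, 22.4])  # ~0.9
telomere_shortening = ts_passage7 < ts_passage2

print(classify_expression(85.0))  # "high expression", e.g., a typical mesenchymal marker
```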
We used a primary mouse IgM antibody (1:500, Sigma-Aldrich) and a Cy3-conjugated goat anti-mouse secondary IgM antibody. Cell nuclei were counterstained with 4′,6-diamidino-2-phenylindole (DAPI, Sigma-Aldrich). The calcium deposits were visualized in sections using von Kossa histological staining. Osteogenic sections were also processed for anti-osteocalcin immunocytochemistry. After deparaffinization, samples were exposed to a primary mouse IgG antibody (1:50, Millipore, Burlington, MA, USA) and a donkey anti-mouse secondary IgG antibody (1:250, Jackson ImmunoResearch Labs, West Grove, PA, USA). Differentiation into adipocytes was induced with the hMSC Adipogenic Induction medium (Lonza) and maintained with hMSC Adipogenic Maintenance SingleQuots (Lonza). The media were used sequentially, switching between them every three days for three weeks. For the fourth week, DPSCs were cultivated only in the hMSC Adipogenic Maintenance medium. Cultures were then fixed with 10% formalin and rinsed with 50% ethanol. Oil Red-O staining solution was applied afterwards for one hour at room temperature. The cells were observed using both a phase contrast and an inverted light microscope. Statistical Analysis All statistical analyses were performed using the statistical software GraphPad Prism 9 (San Diego, CA, USA). After normality had been ascertained using a Shapiro-Wilk test or Kolmogorov-Smirnov test, paired lineages were compared using a paired t-test for continuous variables or a Wilcoxon matched-pairs test on ranks for nonparametric variables (this decision rule is sketched in the example below). Differences were considered statistically significant at p-values of ≤0.05. Conclusions We rejected the hypothesis that there is significant intra-individual variability in the DPSC profile within lineages isolated from different teeth of the same donor. We did not observe any significant effects on proliferation rate, viability, phenotype, or relative telomere length change. The only variances were seen in DPSC lineages isolated from the same donor where the teeth were at different stages of root development; however, even in these cases, we did not find statistical significance. The minor, statistically non-significant heterogeneity in phenotype was probably because DPSC cell lines comprise heterogeneous groups of undifferentiated cells, independent of the donor. We also did not observe any changes in the ability of the cells to differentiate into mature cell types: chondrocytes, osteocytes, and adipocytes. Funding: The study was financially supported by the Charles University programs PROGRES Q40/13 and PROGRES Q40/06.
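A minimal sketch of the paired-comparison rule described in the Statistical Analysis section above: normality is checked first, then a paired t-test or a Wilcoxon matched-pairs test is applied, with significance declared at p ≤ 0.05. SciPy is assumed as the statistics library (the paper used GraphPad Prism 9), and applying the Shapiro-Wilk test to the paired differences is an illustrative choice rather than a detail taken from the paper.

```python
from scipy import stats

def compare_paired_lineages(values_a, values_b, alpha: float = 0.05):
    """Compare paired DPSC lineages (e.g., lineage X-A vs. X-B of the same donors).

    The Shapiro-Wilk test is applied to the paired differences; if they look normal,
    a paired t-test is used, otherwise the Wilcoxon matched-pairs signed-rank test.
    """
    differences = [a - b for a, b in zip(values_a, values_b)]
    if stats.shapiro(differences).pvalue > alpha:
        test_name, result = "paired t-test", stats.ttest_rel(values_a, values_b)
    else:
        test_name, result = "Wilcoxon matched-pairs", stats.wilcoxon(values_a, values_b)
    return test_name, result.pvalue, result.pvalue <= alpha  # significant at p <= 0.05

# Illustrative example: population doubling times (h) for the A and B lineages of six donors.
pdt_a = [38.2, 41.5, 36.9, 40.1, 39.4, 42.0]
pdt_b = [39.0, 44.8, 37.2, 40.5, 40.1, 45.3]
print(compare_paired_lineages(pdt_a, pdt_b))
```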
The value of home-based collection of biospecimens in reproductive epidemiology. Detection, quantification, and prognosis of environmental exposures in humans have been vastly enhanced by the ability of epidemiologists to collect biospecimens for toxicologic or other laboratory evaluation. Ease of collection and level of invasiveness are commonly cited reasons why study participants fail to provide biospecimens for research purposes. The use of methodologies for the collection of biospecimens in the home offers promise for improving the validity of health effects linked to environmental exposures while maximizing the number and type of specimens capable of being collected in a timely and cost-effective manner. In this review we examine biospecimens (urine and blood) that have been successfully collected from the home environment. Related issues such as storage and transportation are also examined, along with promising new approaches for collecting less frequently studied biospecimens (including hair follicles, breast milk, semen, and others). Such biospecimens are useful in the monitoring of reproductive development and function. The design of studies examining the relation between environmental exposures and human health, including subtle markers of human reproduction, usually incorporates the collection of one or more biospecimens from study participants. Such specimens can both minimize misclassification bias associated with exposure or disease status and provide cellular and molecular data that can contribute to an understanding of the physiological processes being impacted. However, a major complicating factor for studies collecting biospecimens, especially large population-based studies such as the proposed National Children's Study (NCS 2003), is that sample collection can be encumbered by the logistics and expense of obtaining biospecimens from participants who live in a variety of locations and lead a variety of lifestyles. One way to address this issue may be to incorporate home-based collection protocols that can be carried out by study participants with minimal to no oversight from study staff members. Such approaches have been widely if somewhat erratically used in previous epidemiologic studies. However, to our knowledge there has been neither discussion in the scientific literature on the utility or extent of the practice nor any assessment of its value in large longitudinal studies such as the NCS. We present a general overview of the utility of biospecimens and discuss those currently amenable to home collection, including how they can be collected and the type of research data they can yield. Impact on Recruitment and Compliance Rates One of the most useful aspects of home collection of biospecimens is the possible improvement of participant response. For example, a study by van Valkengoed et al. (2002) showed that in a screening program for asymptomatic Chlamydia trachomatis infections, mailing urine samples, as opposed to bringing them to the clinic, increased the participation of male subjects by 18% (although no difference was noted among female participants). In another study, participation in a protocol to collect oral rinse samples was greater in the home-based group than in the clinic-based collection group (98% vs. 71%, respectively) (Harty et al. 2000b). There are several plausible reasons for this observed increase in participation by the home-based groups, primarily related to increased convenience for the participant.
Special trips to the clinic are not required, and specimen acquisition and/or testing are performed in the privacy of the participant's home and at his/her convenience. In some cases the lack of interaction with the clinical environment may also add to a feeling of anonymity for the participant. For the aforementioned reasons, one could reasonably hypothesize that participation rates would be higher in studies using home-collection protocols than those requiring participants to attend clinics for specimen collection. However, at this time studies are insufficient to confirm this hypothesis, suggesting that such a study, encompassing different samples and different socioeconomic groups, would be well received and of great benefit to future epidemiologic research studies. Feasibility Issues for Home-Based Collection of Biospecimens When weighing the decision whether to use home-based biospecimen collection, investigators must consider five main issues: • Specimen collection • Specimen storage • Transportation of the specimen to the clinic or analytical laboratory • Stability of the specimen between collection and delivery • Reception, storage, and analysis of the specimen in the laboratory To address these issues, investigators must first determine the specifice use for these biospecimen(s). This in turn will determine how much sample will be needed, how it should be collected, when it should be collected, and how it should be stored and transported. Another consideration in feasibility assessment is the need for quality control of the samples to ensure their usefulness. Because many sources of error can be introduced in the collection and storage of biospecimens (Boone et al. 1995;Plebani and Carraro 1997), standard operating procedures must be developed and implemented regardless of whether specimens collected in the home are procured by research staff or by study volunteers themselves. Specific protocols must be prepared with simple, clear instructions suitable for people at all levels of education and that (in some cases) can be conducted unsupervised within the limits and with the equipment found in a residential dwelling. This means, for example, that for unsupervised home collections, toxic substances should not be included in collection kits, and samples should be amenable to storage at room temperature 4°C or -20°C. Biospecimens Amenable to Home Collection An eclectic body of work on both the acquisition and analysis of home-collected samples has emerged over the past few years. The most common biospecimens collected by investigators to date are urine, blood, and semen. In most cases, these and a number of other samples can be collected in the home (Table 1). In the next section, we discuss the types of data these samples can provide and the issues related to their collection and transportation. Urine. What can be measured in urine? Urine is one of the biospecimens most amenable to home collection. Many currently used biomarkers of reproductive health such as steroid hormones or their metabolites can be measured in urine. Because hormones play such a vital role in the maintenance of reproductive health, knowledge of an imbalance in one or more hormones can help illuminate the cause of health problems. In particular, the levels of these hormones are excellent indicators for studying various aspects of the female reproductive cycle. 
For example, daily sampling of urine throughout the menstrual cycle can assist in the evaluation of the dynamic functions of the hypothalamo-hypophysial-ovarian axis (Kesner et al. 1999; Scialli et al. 1997). Relative levels of steroid hormones in urine can also be used to estimate the day of ovulation through measurement of luteinizing hormone (LH, the basis for most commercially available ovulation predictor kits) (Kesner et al. 1998), or the relative concentrations of estrogen and progesterone metabolites in daily first-morning urine specimens (Baird et al. 1991, 1995). Pregnancy can of course be detected quickly and conveniently using kits that detect the sharp rise in urine of human chorionic gonadotropin (hCG). Because commercially available home pregnancy test kits are sensitive, specific (Ehrenkranz 2002), and easy to use, at least one study has used them in lieu of urine collection to ascertain early pregnancy losses (Buck et al. 2002). Wilcox and colleagues (2001) recently reported that home pregnancy kits may have a false-negative rate of 10% when used the first day after expected menstruation (assuming ovulation is delayed). Corroboration of these findings would underscore the importance of estimating ovulation in protocols involving day-specific exposures or outcomes. Unfortunately, the relation between urinary hormone metabolites and abnormal reproductive function has not been well characterized across populations and particularly susceptible subgroups of the population or in individuals from disadvantaged or medically underserved backgrounds. As urinary hormone metabolites are not commonly reported during the evaluation of reproductive problems, Lasley and Overstreet (1998) called for clinical studies to compare concentrations in urine and blood as a first step in validating this approach. Nevertheless, numerous studies have been conducted on the actual measurement of reproductive hormones in urine, many of which are summarized in the Lasley and Overstreet review. Many of these assays have been refined over the past 10 years, making them cost effective, robust, and accurate. Furthermore, many are available commercially as kits for use at home by untrained personnel [e.g., for measuring estradiol and progesterone metabolites, LH, follicle-stimulating hormone (FSH), and hCG]. Steroid hormone data are not the only information that can be derived from urinary samples. An enormous range of biological, biochemical, and chemical substances can be detected in urine (e.g., CDC 2003), including microbes, pesticides, solvents, and drugs. It has long been known that microbial infection during gestation can adversely impact fetal development (e.g., rubella) and in some cases may result in death (Embleton 2001). Such infections can sometimes be detected in the urine of parents through the use of enzyme-linked immunosorbent, ligase chain reaction, or polymerase chain reaction (PCR) assays. A variety of viruses (papillomavirus, hepatitis B virus, HIV, cytomegalovirus, polyomavirus, adenovirus), bacteria (Neisseria gonorrhoeae and Chlamydia trachomatis), and mycoplasma (Mycoplasma genitalium) can be detected in this way. Furthermore, because of the stability of DNA and the robustness of the PCR process, assays for infections such as cytomegalovirus can be performed on urine collected on filter paper (Yamamoto et al. 2001).
This has the potential to facilitate shipping (it is easier to ship filter papers than specimen collection vials), reduce the biohazard potential of the sample during transportation (the sample is effectively transported as a solid and therefore cannot leak), and make storage of the sample at the analytical facility more convenient. Exposure to pesticides before or during pregnancy can adversely impact development, leading to impaired neurological, immunological, and reproductive function in the offspring (reviewed by Sever et al. 1997). For example, male and female greenhouse workers exposed to certain pesticides have an increased time to pregnancy compared with that of unexposed workers (Abell et al. 2000a, 2000b; Petrelli and Figa-Talamanca 2001). For children born to a cohort of male pesticide applicators, significantly more birth defects occurred in children conceived in the spring than in any other season (Garry et al. 2002). In the same study there was a modest but significant increase in risk (1.6- to 2-fold) for miscarriages and/or fetal loss occurring throughout the year, suggesting a potential association between pesticide exposure and reproductive outcome. In most cases it is unknown whether this connection is through a direct toxic effect on parental gametes or reproductive organs, an adverse impact on the paternal or maternal endocrine system, or direct toxicity to the developing embryo/fetus. However, it is clear that the ability to measure pesticide metabolites in urine may help explain delayed conceptions, aborted pregnancies, and developmental problems by determining whether one or both partners have been exposed to pesticides. Many different pesticides have been measured in urine (reviewed by Aprea et al. 2002). They are generally measured using methods such as mass spectrometry, liquid chromatography-mass spectrometry, gas chromatography with electron capture, gas chromatography-mass spectrometry, or high-performance liquid chromatography. For example, exposure to the organophosphorus pesticides chlorpyrifos and chlorpyrifos-methyl can be determined by measuring their specific metabolite, 3,5,6-trichloro-2-pyridinol, in urine samples (Koch et al. 2001). As with pesticides, solvent exposure can lead to reduced fecundability in both males (Cherry et al. 2001) and females (Sallmen et al. 1995) and to developmental problems when exposure occurs in utero (Scheeres and Chudley 2002; reviewed by Lindbohm 1995). Again, exposure to solvents can be detected in urine samples either directly or by measuring metabolites or biomarkers. For example, several investigators have measured toluene exposure in urine, using benzylmercapturic acid (Inoue et al. 2002), hippuric acid, o-cresol, and toluene itself (Kawai et al. 1996). Analysis of cotinine (a metabolite of nicotine) in urine is used frequently as an indicator of exposure to cigarette smoke (active or passive), although blood or semen can be substituted for urine in this regard (Vine et al. 1993). Such exposures can have an adverse impact on fertilization and embryo development and are therefore of potential interest in fertility and pregnancy studies. How can collection of urine be conducted in the home environment? Using simple collection protocols and storage procedures that are compatible with the facilities available in the home of an average study participant, investigators found that hormones [e.g., LH and FSH (Kesner et al. 1998, 1999)], solvents [e.g., benzene and toluene (Senzolo et al.
2001)], pesticides [e.g., 2,4-dichlorophenoxyacetic acid (Hu et al. 2000)], and/or metabolic products thereof [e.g., organophosphorous pesticides (Curl et al. 2003;Hu et al. 2000)] are stable in urine. Hence, home collection of urine samples by study volunteers is a feasible sampling approach (Macleod et al. 1999), and various protocols have been developed that create minimal inconvenience for study participants. When samples are collected at home, they are typically stored frozen. Freezing allows storage of a large sample volume, and multiple samples can be accrued and/or combined. Thus, the number of analytical measurements that can be made is essentially unlimited. In a study by Reutman et al. (2002), first morning urine samples were collected into vials containing glycerol at a final concentration of 7%. The glycerol prevents freeze-induced activity loss of LH and FSH . The samples were stored in the participant's freezer (-20°C) until the end of the study, then shipped en masse in dry ice by express courier to the analytical laboratory. LH, FSH, estrone 3-glucuronide (a metabolite of estradiol), and pregnanediol 3-glucuronide (a metabolite of progesterone) were all successfully measured in these samples. Although reproductive hormones are fairly stable in urine and home collection of urine for metabolite analysis has been carried out in collection vials such as those described above, a more convenient method for a large longitudinal cohort study might be to develop a system for home collection of urine samples on filter paper. Such samples would be even easier to store until collection or mailing. In most cases they could be mailed in an envelope at ambient temperature, speeding delivery and minimizing costs. Furthermore, storage of filter paper requires less space than vials, an important consideration for long-term studies where tens or hundreds of thousands of samples are accumulated. Such a convenient system would facilitate daily or more frequent sampling with minimal inconvenience to study participants. The question remains as to what can be measured from such biospecimens. Shideler et al. (1995) used samples collected on filter paper for analyzing steroid hormone (estrogen and progesterone) metabolites in urine. Hormone metabolite analysis of the paper-stored samples was comparable to results obtained from analyses of the original liquid samples. Furthermore, storage of up to 1 year had no effect on hormone concentrations. This technology may also lend itself to the analysis of many other metabolites, including those derived from pesticides, drugs, and other toxicants. For example, McCann et al. (1995) quantified orotic acid (vitamin B13) from such samples. Despite these findings, it is clear that further pilot studies examining the optimal filter paper to use and the range of metabolites that can be detected need to be carried out before this method of storing and shipping home-collected specimens can be considered more seriously. Collection of urine from infants, potentially one of the most highly sampled subgroups in a children's longitudinal study, offers a unique challenge. Several approaches have been used, including collection pads (placed inside diaper), U-bags, and clean catch into sterile bottles. A study by Liaw et al. (2000) found that all approaches were equally effective at excluding infection and avoiding contamination of samples. However, parents preferred collection pads because they were easier to use and most comfortable for the infant. 
Robertson and Fortmann (Unpublished data) are attempting to simplify the process even further by developing a system for extracting urine from disposable diapers and analyzing it for biomarkers of pesticide exposure and creatinine. Collection of samples from toilet-trained children (approximately 2-5 years of age) is less challenging, though special measures may still be needed. For example, in a study of organophosphorous pesticide exposure by Curl et al. (2003), parents were provided with a commode specimen collection pan and polypropylene bottles to store the samples. Children urinated either into the commode inserts, the contents of which were then poured into a polypropylene bottle, or directly into the bottles. Urine collection bottles were stored inside the plastic container in the families' refrigerators overnight until researchers retrieved them the following day. Blood. What can be measured in blood? Blood is a relatively accessible and informative tissue, although its collection is somewhat invasive and many study participants refuse to donate blood specimens for research purposes. Numerous naturally occurring biochemical molecules can be found in blood, including hormones, various other proteins, and chemical metabolites. Environmental pollutants such as organochlorine compounds (Mussalo-Rauhamaa 1991), dioxins (Smith et al. 1992), polychlorinated biphenyls (PCBs) (Schuhmacher et al. 2002), hexachlorobenzene, and 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene (p,p´-DDE) (Becker et al. 2002) can also be measured in blood. Metals such as beryllium, cadmium, manganese, mercury, and lead (Becker et al. 2002;Schuhmacher et al. 2002), and drugs such as nicotine (or its metabolite cotinine), cocaine, and caffeine (Dempsey et al. 1998) can be measured in blood as well. Nucleated cells (primarily leukocytes) can be obtained from blood. These are useful in that they can be cultured ex vivo to examine how they respond to various cytokine and chemical exposures, which may give some idea how the individual would respond if exposed to the same. Leukocytes can also provide RNA for gene expression analysis (Rockett et al. 2002) and DNA, which can be used for polymorphism and sequencing analysis and to detect chromosomal aberrations (Sorokine-Durm et al. 1997) and DNA adduct formation (Poirier 1997). How can collection of blood be conducted in the home environment? Proper extraction, handling, and transportation procedures are most important in blood collection to prevent injury to the donor, protect the collector from accidental exposure to infectious microbes, and maintain specimen integrity. With few exceptions, the home collection of blood requires the presence of a trained phlebotomist. Even so, if a blood draw is prepared and transported under improper conditions, laboratory test results can be altered. A recent review by Becan-McBride (2002) examines in detail how samples collected in nonclinical environments such as the home should be transported to avoid specimen transportation errors. Depending on the analyte(s) to be measured, the blood may need to be kept within a certain temperature range. It is also necessary to use appropriate collection tubes. A variety of tubes are available from several suppliers. Different additives are included in the tubes depending on what the sample will be used for. For example, sodium heparin and sodium-EDTA tubes are generally used where trace element, toxicology, and nutritional analyses will be conducted. 
Potassium oxalate/sodium fluoride tubes are used for glucose determinations. Tubes containing clot activators and gel are used for preparing serum for hormone and other analyses. For further details see the BD Vacutainer tube guide (Becton, Dickinson and Company 2003). Other tubes are available (PAXgene blood collection tubes; Qiagen Inc., Valencia, CA; http://www1.qiagen.com/default.aspx) whose contents "freeze" cell transcription and preserve RNA for gene expression studies, though these are currently approved for research use only. In most cases, unnecessary shaking must be avoided to prevent hemolysis, and samples must sometimes be returned to the analysis laboratory within a strict time frame to permit certain assays to be performed. Blood is transported in a collection tube. However, as with urine, studies determining the value of collection and transportation of blood specimens on filter paper have been conducted. The idea of collecting blood samples on filter paper has been around for more than 30 years (Hill and Palmer 1969). Parkes et al. (1999) recently reported an at-home test that combined a filter paper technique for spotting capillary blood with an immunoturbidimetric assay for measuring hemoglobin A1c. Others have successfully used a similar approach to measure glucose (Ward et al. 1996) and HIV infection (Spielberg et al. 2000), and these discoveries have prompted the commercial development of home collection kits. At least two companies currently market products for consumers to check their HIV status. Both kits require the individual to collect a small blood sample from her/his fingertip and mail it to a designated medical laboratory for analysis. Study data seem to support the accuracy and reliability of HIV testing through at-home collection, demonstrating 99.9% accuracy. Other companies (e.g., FlexSite Diagnostic Inc., Palm City, FL; http://www.flexsite.com/pgs/3.html) specialize in the development of products and services that allow complex diagnostic tests to be conducted in the home and other nonlaboratory settings. A FlexSite product near release is a cholesterol profile kit that uses a patented device to collect a dried blood sample for measuring total cholesterol, high-density lipoprotein cholesterol, and triglycerides, along with a computed low-density lipoprotein cholesterol value. The device (SerSite) separates serum from blood cells as it absorbs the blood. The blood sample is then mailed in a dry state to the FlexSite clinical laboratory for analysis. Although such kits exist or are under development, they are marketed primarily to medical workers who take patient samples outside the clinical environment and thus are not designed for use by the general public. Home collection of blood is a limited option for large studies for the following reasons: • Most important, obtaining reasonable quantities of blood (a few milliliters) via a venous puncture is a somewhat invasive procedure with possible adverse side effects if carried out incorrectly. As such, it requires the supervision or assistance of a trained phlebotomist. • Although small amounts of blood can be obtained by finger prick, many people are reluctant about lancing their fingers or are hesitant about the sight of their own blood. • Blood is often harder to obtain from children, and parents are sometimes reluctant to approve the procedure.
• Many potentially useful analytes in blood are labile and samples require either immediate processing or a controlled storage of the type not normally found in the home. Thus, although blood is one of the most useful of biospecimens on which a variety and types of assays can be conducted, home collection may not be feasible for a large epidemiologic study that relies on study participants alone for the collection of blood. Perhaps the only exceptions to this are certain groups of individuals (e.g., diabetics, nurses) who are trained and/or are familiar with taking small samples of their own blood. Such people may provide good cohorts for bloodbased subsampling studies. Semen. What can be measured in semen? Of late, semen collection is receiving increasing attention with respect to home collection. Semen can be analyzed to evaluate sperm characteristics and to measure hundreds of biological and chemical components of seminal fluid including steroid hormones, sugars, vitamins, enzymes, proteins, and metals. In addition, environmental exposures can introduce xenobiotic compounds such as pesticides and heavy metals into seminal fluid (Kumar et al. 2000). This has two possible ramifications. First, many kinds of exposures and chemicals affect human sperm quantity and quality. Second, the vagina absorbs a number of components of semen that can be detected in the female bloodstream within a few hours of sexual intercourse (Benziger and Edelson 1983;Sandberg et al. 1968). This has possible implications for the exposure of embryos and fetuses to components of seminal fluid by both intracanicular and bloodborne routes and may even serve as a route of exposure for women. Many xenobiotics, metals, and naturally occurring biochemical components of semen have been measured to determine how their presence relates to fertility (Lay et al. 2001;Younglai et al. 2002). In addition to measuring biochemical components of the semen, one can conduct routine sperm analysis (concentration, motility, morphology) (Davis and Katz 1989;WHO 1999), evaluate more specific markers of sperm function such as genetic and chromosome integrity (Evenson et al. 2002;Perreault et al. 2000), and conduct gene expression profiling experiments (Ostermeier et al. 2002). How can collection of semen be conducted in the home environment? Various investigators have developed prototype collection and transportation kits, such as the TRANSMEM100 (Royster et al. 2000). These kits can be distributed to study participants, and then collected by the study personnel, delivered by the participant, or shipped directly to a central laboratory for analysis once the sample has been collected. The TRANSMEM100 collection system was intentionally made simple, requiring only that the subject collect the semen sample in a toxicology-tested specimen jar, place the jar in a biohazard bag and secondary container, close the package, and call the overnight courier for pickup. Illustrated instructions are included with the kit. The initial pilot study on the utility of the TRANSMEM100 indicated that 65-80% of the samples were received in the laboratory the day after they had been collected and were of sufficiently good quality to carry out a number of standard measurements such as semen volume, and sperm number, concentration, and morphology. Recently, the TRANSMEM100 was tested for sample stability with regard to newer, more specific tests of sperm nuclear integrity. 
For example, the sperm chromatin structure assay (SCSA; SCSA Diagnostics, Inc., Brookings, SD; http://www.scsadiagnostics.com/) detects increased susceptibility to acid-induced DNA damage in sperm and is a measure of sperm genomic integrity (Evenson et al. 2002). SCSA results were comparable when semen samples were frozen right after collection or after 24-hr storage at 4°C, but the percentage of abnormal cells increased significantly if samples were kept at room temperature for the 24 hr before freezing (Morris et al. 2003). These findings indicate that inclusion of cold packs during overnight shipment would be necessary to ensure sample stability for this assay. On the other hand, an assay measuring chromosome breakage in sperm gave comparable results in fresh semen and semen stored at room temperature for 24 hr (Young et al. 2003). Other approaches range from the relatively advanced Bio-Tranz (Zavos Diagnostic Laboratories, Inc., Lexington, KY; http:// www.zdlinc.com/index.htm) shipping system (Zavos et al. 1998), for shipping semen at low temperature in protective medium to allow clinical diagnosis of infertility, to a simple process in which semen was frozen in condoms by study participants and later collected by the study organizers (Arbuckle et al. 1999). Although such simplified storage and collection procedures reduce the number of fecundity markers that can be measured in fresh, whole semen, they can still be used to measure the presence of pesticide metabolites and other stable biochemical molecules. Although none of these home-collection systems have been thoroughly characterized for their ability to maintain the integrity of all the numerous analytes that can be measured in semen, they have clearly proven useful for measuring a number of useful parameters (e.g., certain sperm measures and pesticide levels). There is clearly a trade-off between the cost and complexity of the transportation system and the number of end points that can be measured in the laboratory. These considerations will need to be weighed in relation to study purpose to determine their ultimate applicability for field-based research. Other Potential Biospecimens for Home Collection Biospecimens with the potential for home collection need to be informative, accessible, and easy to produce. Saliva, milk, hair, hair follicles, nail, and buccal cells may also prove to be viable alternatives, depending on the biological marker being measured, and have been used with varying degrees of success. Saliva. Steroid hormones normally are measured in urine and blood. However, several can also be measured in saliva, including dehydroepiandrosterone (DHEA), dihydrotestosterone, testosterone, estradiol, estrone, progesterone, and androstenedione cortisol. The use of saliva as a biospecimen offers several advantages: • Ease of use: Saliva specimens can be collected anywhere at any time and at a much lower cost than blood collection. For example, collection of saliva samples from highly mobile flight attendants also was reported to produce high-quality samples for analysis (Whelan et al. 2002). • Saliva collection is noninvasive and less stressful than venipuncture and thereby less likely to alter markers responsive to physiologic/ psychologic stress. • Saliva collection is more feasible when collection at timed intervals is desired (e.g., early morning). • Hormones in saliva are exceptionally stable. They can be stored at room temperature for at least a week without loss of activity. 
The choice of a saliva collection method should be tailored to the individual hormones to be quantified, as studies have shown that certain types of collection methods, such as the cotton-based Salivette system (Sarstedt, Newton, NC; http://www.sarstedt.com/php/ main.php?SID=c1a82144d94e2efe0f11ec7ba7 4b3f72&language=en), can produce artificially elevated levels of certain hormones, including DHEA, testosterone, and estradiol (Granger et al. 1999a(Granger et al. , 1999bShirtcliff et al. 2000Shirtcliff et al. , 2001. With this in mind, several methods are available currently for home collection of saliva. The first is to request study participants to expectorate up to 3 mL of saliva into a wide-mouthed container over a period of 10 min (Riad-Fahmy et al. 1987). The second is to ask participants to chew on a 6-inch cotton dental roll (Hertsgaard 1992). A portion of the saturated roll is subsequently placed into a needleless syringe and the saliva expressed into vials for analysis. The most common method, however, is to use the commercially available Salivette collection system. Study participants are asked to chew on a polyester roll for 3 min before placing it into a plastic tube for shipping and analysis. Once the sample arrives at the laboratory, the tube is centrifuged to recover the saliva. Although three different versions of the Salivette system are available, studies have shown that the polyester insert form without citric acid crystals is generally the most appropriate for research purposes (Lamey and Nolan 1994;Schwartz et al. 1998), as the use of citric acid crystals to stimulate saliva flow can interfere with certain assays by lowering the pH of the sample (Schwartz et al. 1998). Salivette samples will begin to mold after 4-7 days; thus, it is recommended that they be stored at -20°C, if possible. Samples can later be shipped (on dry ice) to the testing facility via regular mail. Of course there are also limitations to using saliva. The number of hormones that can be measured in saliva is fewer than can be measured in blood, and unlike blood, saliva does not provide live cells for other types of studies such as RNA expression analysis. Breast milk. Research on chemical contaminants in breast milk spans several decades and dozens of countries. Results indicate that a wide range of chemical contaminants may enter breast milk, including organochlorine pesticides, PCBs, polychlorinated dibenzop-dioxins (PCDDs), polybrominated diphenyl ethers, metals, and solvents (Solomon and Weiss 2002). These findings have highlighted gaps in current knowledge about this postnatal route of exposure, including the lack of information on the nature and levels of contaminants in breast milk and the lack of consistent protocols for collecting and analyzing breast milk samples. Breast milk contaminants are of particular interest where breast-fed infants are concerned, as many of the contaminants identified thus far have developmental effects in rodent models. Developmental effects in humans are not well characterized, and there is a general lack of data on health outcomes that may be produced in infants by exposure to chemicals in breast milk. However, in the studies conducted thus far, there is evidence that exposure to PCBs both pre-and postnatally through breast milk does have subtle negative effects on neurologic and cognitive development of children up to school age (Vreugdenhil et al. 2002;Walkowiak et al. 2001). Reproductive effects may also occur. For example, Blanck et al. 
(2000) found that in utero and lactational exposures to polybrominated biphenyls were associated with an earlier age of menarche. Despite such studies, there remains a general paucity of data on outcomes related to infant exposure via breast-feeding, particularly those with a time-dependent nature. This information is necessary for performing exposure assessments without heavy reliance on default assumptions. Landrigan et al. (2002) thus called for "a carefully planned and conducted national breast milk monitoring effort in the United States" to provide the information needed to assess infant exposures through breast-feeding and to develop scientifically sound information on the benefits and risks thereof. Collection of milk in the home environment for contaminant analysis is relatively simple. In a recently completed prospective pregnancy study that recruited women from 16 counties upon stopping birth control, women who later gave birth and initiated breast-feeding were asked to provide at least one breast milk sample using a standardized protocol (100% compliance). Mothers successfully collected and shipped fresh milk samples along with a freezer pack to a toxicologic laboratory via Federal Express (Buck et al. 2002). Frozen breast milk samples have been used to analyze levels of PCBs, PCDDs, polychlorinated dibenzofurans, and numerous organochlorine-based compounds (Hooper et al. 2002). Further information on the collection and archiving of human milk may soon become available from the U.K. Department for Environment, Food and Rural Affairs. The department is currently cofunding a pilot project for establishing a U.K. human milk archive of representative samples that will be available for chemical analysis for up to 10 years (H.M. Government Department for Environment, Food and Rural Affairs 2002). The study aims to develop methods for recruiting participants, establish robust procedures for collection, transport, storage, and analysis of breast milk, and perform initial analysis for PCBs, dioxins, and phytoestrogens. Data from this study will provide information on temporal trends of environmental contaminants and an assessment of infant risk from exposure via human milk. Hair/nail. The main advantage of using hair and nail is that they can be collected in a safe and noninvasive manner. However, being keratinous in nature and containing no living cells, hair and nail are not often considered of high value for many epidemiologic and exposure studies. Nevertheless, they may be useful biospecimens under certain circumstances. Hair, for example, has been used for decades as a timeline for exposure to heavy metals. Many heavy metals impact fertility and fecundity, either through direct toxicity in the reproductive organs, by adversely affecting the endocrine system, or both. Lead, chromium, and cadmium, for example, reduce human semen quality (Li et al. 2001; Telisman et al. 2000). Female exposure to mercury alters estrous cyclicity in rats (Davis et al. 2001), and recently, blood lead levels were negatively associated with puberty milestones in girls (Selevan et al. 2003; Wu et al. 2003). Hair analysis can indicate exposure to numerous toxic metals, including mercury, lead, arsenic, aluminum, and cadmium, as well as to other metals including calcium, zinc, manganese, cobalt, iron, potassium, sodium, and titanium. Although a useful tool for detecting chronic exposures, hair is not suitable for detecting very recent metal exposures (a blood test is required for this).
The clipping, storage, and transportation of nail and hair are clearly simple tasks that can be conducted by most individuals. If the hair is long enough, it is generally taken from the nape of the neck. About an inch in length of the hair closest to the skin is needed, in a quantity approximating a heaped teaspoonful. For bald men or those reluctant to donate head hair, a viable alternative is to take samples from the underarm or pubic area. A little-tested use of hair and nail is gene sequencing. Mitochondrial (mt)DNA can be found in hair and nail (Schreiber et al. 1988). Some groups have successfully tested hair and nail as an alternative to blood DNA for genotyping of polymorphic drug-metabolizing enzymes (Tanigawara et al. 2001). One possible disadvantage of using hair and nail is that they can be easily contaminated with extraneous biological and chemical material (e.g., dirt under the nails; shampoo and dye residues in hair) that can sometimes complicate tissue analysis. Hair chemicals, including dyes, can contain lead that will attach to the hair and may contaminate the sample. The most accurate results thus come from hair that has not been chemically treated for at least 2 months. Hair follicles. An aggressive pluck of a human hair will usually remove the root follicle along with the hair. In about 90% of cases, this specimen (trichogram) will be of a hair in the actively growing phase (anagen), and therefore likely to yield sufficient quantities of good-quality RNA to support gene expression analysis. RNA is notoriously quick to degrade in samples once they are detached from the body. Therefore, to maintain the integrity of the RNA, such samples normally need to be processed quickly. Ambion, Inc. (Austin, TX; http://www.ambion.com/) has recently developed a storage product called RNAlater, an aqueous, nontoxic tissue storage reagent that rapidly permeates tissues to stabilize and protect cellular RNA (Ambion 2003). Once in RNAlater, RNA is stable for 1 week at room temperature, 1 month at 4°C, or indefinitely if frozen. These properties could be used to develop a simple kit that permits the home collection, storage, and shipping of hair follicles without specialized protocols or equipment. Previous studies have found the yield of RNA from such human hair follicles to be in the region of 0.9 µg per whole follicle (Mitsui et al. 1997). This is sufficient for small numbers of limited gene expression profiling analyses using reverse transcription-PCR (RT-PCR). RT-PCR-based gene expression profiling carried out on RNA extracted from hair follicles has yielded information on gene expression of growth factors (Mitsui et al. 1997) and enzymes (Chang et al. 1997). Unfortunately, RT-PCR assays are a rather limited form of gene expression analysis in that they can measure expression of only a few genes at a time. The recent development of DNA arrays (see reviews by Dix 1999, 2000) has overcome this problem. Such arrays can detect the expression of many tens of thousands of genes simultaneously. Unfortunately, most require more RNA than can be obtained from a few hair follicles. The solution may be to incorporate a preamplification step prior to labeling and hybridization of the RNA sample. Fink et al. (2002) used this approach successfully in carrying out microarray analysis of RNA extracted from laser capture microdissection samples.
Although the parity between array data from preamplified and regular RNA samples has yet to be fully established, early indications are that the gene expression patterns are fairly comparable. Alternatively, rolling circle amplification of the bound probe following hybridization to the microarray (Nallur et al. 2001) may prove a viable alternative. In the future such studies using RT-PCR or gene array analysis may provide information on exposures through the identification of certain gene expression patterns. However, this approach is currently in embryonic form and will probably be without practical application for 5-10 years. Buccal cells. Perhaps the most useful application of buccal cells (the epithelial cells lining the inside of the cheeks) is as a source of DNA for genotyping studies. However, buccal cells have also been used as a source of material for immunoassays (Byrne et al. 2000) and appear to be a good source of tissue for monitoring human exposure to inhaled and ingested occupational and environmental genotoxicants. Results of a study by Burgaz et al. (2002) suggested that occupational exposure to organic solvents may cause cytogenetic damage in buccal cells and that use of exfoliated buccal cells appears appropriate to measure exposure to organic solvents. These studies together demonstrate how the same samples can sometimes be used to measure markers of both exposure and effect, thus helping to maximize the amount of useful information that can be obtained from a sample and improving the cost-benefit ratio. Exfoliated buccal cells can be collected quickly, easily, and conveniently. Several methods of collection have been described, including the use of special cards (Harty et al. 2000a), cytobrushes (Garcia-Closas et al. 2001), cotton swabs (Koletzko et al. 1999), saline rinses (Hayney et al. 1995), and mouthwash. Lench et al. (1988) originally demonstrated that sufficient human DNA for gene analysis can be isolated from buccal cells obtained by mouthwash. Lum and Marchand (1998) later confirmed this, showing that good-quality DNA suitable for PCR-based genotyping could be obtained from 10 mL undiluted commercial mouthwash swilled in the mouth for 60 sec, then expelled into a collection container. For home collection of buccal cells (at least for DNA analysis purposes), the aforementioned studies and others have shown that collection using the mouthwash approach gives greater yields than other methods and is feasible for use in cohort studies. Lum and Marchand (1998) reported that storage of the unprocessed specimens at room temperature or at 37°C for 1 week (temperature conditions that may be encountered when mailing samples) did not affect the DNA yield or ability to PCR amplify the samples. Study workers for the National Birth Defects Prevention Study (2003) have successfully developed and used kits for the collection of buccal cells, which are sent to study participants through the mail (Rasmussen et al. 2002). The kit contains an informed consent form, simple instructions, materials for collecting the specimens, a small monetary reimbursement in the form of a money order, and a prepaid U.S. Mail packet for specimen return. The main caveat of using buccal cells for DNA is that researchers should be aware of the likely presence of nonhuman DNA in the extracted specimens. This can originate from food residues and from microflora that live in the digestive system and respiratory tract. 
Such nonhuman DNA is normally not problematic for PCR and other hybridization studies as long as appropriate (i.e., specific) probes and primers are selected.
Vaginal swabs. Bacterial vaginosis (BV) is an alteration of the vaginal flora in which the normally predominant Lactobacilli are replaced by a cocktail of other organisms, including those responsible for sexually transmitted diseases (STDs) (e.g., Neisseria gonorrhoeae, Chlamydia trachomatis, and Trichomonas vaginalis). BV has been associated with a number of adverse outcomes in pregnant women, including late miscarriage, premature rupture of membranes, preterm delivery, postpartum sepsis, and postpartum endometritis (Gravett et al. 1986; Hay et al. 1994; Jacobsson et al. 2002). Prevalence of BV varies among different groups of women, but recent studies suggest that between 4 and 61% may suffer from the disease, including as many as 20% of pregnant women and 12% of adolescent virgins (Priestley and Kinghorn 1996). The diagnosis of BV normally requires the procurement of vaginal swabs. These can be used to identify agents of infection through microscopic observation, microbial culture, or nucleic acid amplification technologies. There is evidence to suggest that collection of vaginal swabs may be amenable to home-based strategies and may have the added benefit of increasing participation. In an office-based study by Smith et al. (2001), participants were offered the choice of STD screening in the context of a traditional pelvic examination or the use of self-obtained vaginal swabs. All eligible participants chose the latter, suggesting that most female patients are comfortable obtaining such samples. In a different approach, clinical staff were successful in visiting the homes of community-based trial participants and collecting self-administered vaginal swabs (Wawer et al. 1998). Compliance with interview, sample collection, and treatment in this study was over 90%. The home collection of vaginal swabs raises two issues. The first is the potential biohazard to other family members or visitors, although this could conceivably be overcome by the use of carefully designed containers. The second is that storage of samples or delay in getting them to the laboratory may adversely impact identification of infectious agents using standard microscopic or culture techniques. An appropriate method for the collection, storage, and transportation of self-administered vaginal swabs thus needs to be determined. Alternatively, nucleic acid amplification methods are available (Smith et al. 2001) that are both sensitive and adaptable to high-throughput assays.
Home-Based Analysis of Biospecimens
Recent advances in technology now permit participants not only to collect biospecimens at home, but also to analyze them. Using commercially available kits, individuals are now able to track the timing of their fertile window, identify pregnancies, and estimate sperm concentrations in semen, all in the privacy of their own home. These new techniques offer promise for research purposes, as they allow investigators to collect information without the time and cost associated with having samples analyzed at a remote location.
Home fertility monitors. Home fertility monitors are now available that allow women to detect both the occurrence and timing of ovulation. Several kits are commercially available, including ClearPlan Easy (Unipath Ltd., Bedford, Bedfordshire, UK; http://www.unipath.com), Ovuquick One-Step (Quidel Corp., San Diego, CA; http://www.quidel.com/Home.php),
and Surestep (Applied Biotech, Inc., San Diego, CA; http://www.abiapogent.com). The kits work by detecting the LH surge in urine with varying levels of sensitivity, ranging from 35 mIU/mL (Surestep) to 50 mIU/mL (ClearPlan Easy) (Nielsen et al. 2001). The ClearPlan Easy kit offers an advantage over the other methods in that the system will store basic fertility data for up to 6 months. The information is stored on a data card that can be easily transported to the study site for download to a personal computer. The information stored includes the start date of each cycle, the cycle length, the date of the LH surge, and the dates of intercourse. Other home-based ovulation detection systems, such as the Lady Free Biotester (TK Yun, South Korea) and the TCI OvuLook (TCI Optics, Inc., Kapaau, HI; http://www.ovulook.com/), detect ovulation based on salivary ferning (Barbato et al. 1993). The latter also has a built-in tracking system that allows women to see and refer back to their saliva patterns over time, thus providing a system that could be useful in epidemiologic studies.
Home pregnancy tests. In 1999 approximately 19 million home pregnancy test kits were sold in the United States (Lipsitz 2000). Among a sample of women with children, approximately 33% reported using a home pregnancy test prior to seeking care from a professional (Jeng et al. 1991). The currently available home pregnancy test kits use monoclonal antibodies to detect hCG in urine. Although the manufacturers of home pregnancy test kits claim that they are 97-99.5% accurate, recent work suggests that the individual sensitivities and specificities vary somewhat by brand when the products are used in the general population (Bastian et al. 1998). Many kits instruct women to test their urine as early as the day that menses is expected. However, using women's self-reported average cycle lengths, Wilcox et al. (2001) estimated that the maximum possible sensitivity on the day that menses is expected is 90%. Only after 1 week was the sensitivity found to be about 97%. The high false-negative rate seen among home pregnancy tests has been attributed to two factors. The first is timing: without the aid of a fertility monitor to pinpoint ovulation, women who ovulate late in their cycle may end up testing too early (i.e., before implantation has had time to occur). The second relates to user error: comprehension of testing protocols needs to be established to ensure that study participants are using kits correctly and yielding valid data. We are aware of only one study that reported difficulties associated with understanding the instructions in home pregnancy tests. In this study Daviaud et al. (1993) reported that 230 of 478 positive pregnancy tests were falsely interpreted as negative, with difficulty in understanding the directions being cited as the primary reason for the error. Although there may be some concern that home pregnancy tests are marketed to the general public as being highly sensitive and specific throughout testing, they remain an extremely valuable data collection methodology in a research setting. Home pregnancy test kits allow investigators to detect early losses in situations where the collection of urine for analysis at a remote location is not feasible. Through careful instruction, women can be shown how to use the kits in a manner that reduces the chances of generating false data.
At-home screening for male infertility. Recently, an at-home test kit became available (FertilMarQ; Embryotech, Wilmington, MA; http://www.embryotech.com) that allows men to evaluate one aspect of semen quality, sperm concentration, in the privacy of their home. According to the instructions, semen samples are to be collected by either masturbation or intercourse (with a special condom) following a 3-day period of abstinence. After allowing the semen to liquefy in the provided cup for at least 15 min, it is transferred to the testing well via dropper and combined with the appropriate reagents. A light blue color in the test well indicates that the individual has a sperm concentration of < 20 million/mL, which is consistent with the operational definition for oligospermia (Rowe et al. 1993). The test is to be repeated 3-7 days after the first test to confirm the finding. According to the manufacturer, the overall accuracy of the test is 78%. Although the test cannot provide a definitive answer regarding the presence of male infertility, it may serve as a useful screening tool in certain populations.
Limitations of Home-Based Collection and Analysis of Biospecimens
Noncompliance. In many cases, home collection of specimens may seem to be an ideal solution to some of the limitations of clinic-based studies. However, it is not without its potential problems. Although the convenience factor may be an important consideration, it is prudent to recognize that the advantages of home-based sample collection are potentially offset by noncompliance with collection and storage instructions. For example, for many sensitive reproductive end points such as hormonal profiles, specimen collection timed to a menstrual cycle or to diurnal or other fluctuations can be more difficult (though not impossible) to accomplish successfully in the home.
Range of assays. Many potentially useful assays and experiments cannot be conducted on biospecimens collected and stored in the home. Home collection clearly lacks the clinical and/or scientific environment (i.e., equipment and specialized training) necessary to collect and process certain analytes. For example, if live cells are needed (e.g., blood cells for in vitro exposures or semen for sperm motility measures), the samples need to be collected in an environment where they can be processed quickly before the cells die.
Introduction of error. A further complicating factor is that many laboratory and healthcare workers overlook the impact of specimen collection, storage, and transportation on medical errors. Specimen collection and transportation originating outside the laboratory can increase laboratory error rates. Indeed, 46-68% of laboratory errors occur in the preanalytical rather than in the analytical and postanalytical phases (Boone et al. 1995; Plebani and Carraro 1997). Thus, when planning an epidemiologic study involving the collection of field specimens, the nature of the specimens and the types of measurements that will be derived from them must be carefully considered to determine the best way to obtain, store, and transport the specimens to maintain the integrity of the target parameters (e.g., steroid hormones, pesticide metabolites, DNA, RNA).
Transportation of biospecimens. Movement of samples from the homes of study participants to the clinic or analytical laboratory is perhaps the main factor when considering the suitability of home collection, as it usually involves the most time and expense and increases biohazard concerns.
Transportation can take place via one of three modes:
• Conveyance to the clinical office by the study participant. Transportation is normally at ambient temperature. This approach may reduce costs slightly, facilitate more rapid transfer of the sample, and reduce the chances of the sample being lost or damaged in transit.
• Collection by study staff. This may facilitate maintenance of samples in a more controlled environment (e.g., frozen) but is more costly than the other two methods.
• Shipping by mail or courier service. In most cases, samples are shipped at ambient temperature. This is acceptable for many analytes. Where necessary (usually in warmer climates), cold packs can be included to maintain a cool temperature during overnight shipments and avoid sample degradation. However, when time is a critical factor (e.g., sperm motility as a part of semen analysis), a courier may be the only (and more costly) alternative.
In terms of biohazard risk, the main pathogens of concern are bloodborne pathogens such as HIV and hepatitis. However, the handlers of any human biospecimens can be at risk of exposure to these or other pathogens. The risk is minimized when the patient brings samples into the clinic. Risk is increased for study workers who visit the subject's home to obtain the sample (e.g., draw blood) and transport it. In these cases the safety of the study worker and participant (or his/her family) is increased by following validated stepwise procedures designed to reduce the chances of sample exposure. The collection of liquid samples presents the greatest risk to those who collect and transport them, and there should be established protocols for carrying equipment and samples. For example, all equipment and samples should be transported in a sturdy lockable container, with specimens inside sealed in a secondary container displaying a biohazard label. The risk of accidental exposure becomes higher when biological samples, particularly in liquid form, are shipped via courier or postal service. Indeed, biological specimens should not be sent by courier or through the mail unless they are contained in an approved shipping container. In the United States, this means that the containers must comply with U.S. Postal Service regulations (USPS 1999), Centers for Disease Control and Prevention regulation 42 CFR 72 (CDC 1999), and Department of Transportation regulation 67 FR 53118 (DOT 2002), all of which address the shipment of clinical specimens. Various companies produce such containers. Doxtech (Beaverton, OR; http://www.doxtech.com/), for example, produces a high-security container for collection, transport, and storage of specimens for "drug testing, forensic evidence, food samples and potable water samples." Qorpak (Bridgeville, PA; http://www.qorpak.com/) produces a two-part specimen mailer manufactured to meet government regulations for mailing liquids in glass or plastic. The outer container has a fiberboard body with a waxed inside liner, and the metal neck is securely crimped to the fiber body. The inner container, which holds the specimen, is a high-density polyethylene bottle with a polypropylene closure and polyethylene foam liner. The increasing number of biospecimens being sent through the mail has even prompted some postal agencies to design special single-use containers specifically for this purpose.
The Royal Mail Service in the United Kingdom recently launched Safebox, a prepaid packaging concept for medical, veterinary, and pharmaceutical samples (First Filtration International, Banglamung, Thailand; http://www.firstfiltration.com/safebox.html). Safebox is a tough plastic container that is delivered by the Royal Mail as part of a normal postal delivery. It is opened at the laboratory by pulling off a tear strip, and because the inner chamber is transparent, any leakage can be detected without risk of physical contact. If leakage does occur, it is absorbed by the Aqui-Pak system (First Filtration International), a protective packaging material for the transport of pathological specimens. Aqui-Pak complies fully with secondary packaging instructions issued by the USPS and Royal Mail. Finally, because the Safebox is designed for single use only, there is no risk of cross-contamination from packaging materials being reused. In some cases field specimens may originate outside the country where the analytical laboratory is based. Since the World Trade Center tragedy, the international transportation of hazardous or potentially hazardous agents has come under increased scrutiny. Given that all human samples are considered potentially hazardous, investigators are responsible for adhering to packaging and shipping standards issued by the International Air Transport Association (IATA). The IATA produces both regulations and training literature for dangerous goods, and various IATA-endorsed training programs are offered by companies specializing in the provision of such training.
Conclusions
Though there are a number of caveats, the use of home-based collection of biospecimens in epidemiologic studies, including those focusing on sensitive reproductive end points, should assist investigators in obtaining high participation rates and thereby minimize misclassification bias regarding either exposure or effect. This strategy, in addition to all the other advantages discussed, helps to ensure valid study conclusions while filling critical data gaps. For example, couple-based prospective pregnancy studies designed to assess effects of parentally mediated exposures before, at, or shortly after conception could benefit from home-based biospecimen collection, which would make it easier to obtain samples at specific times over the course of menstrual cycles, pregnancy, and lactation. Such approaches offer promise for ascertaining data to fill critical data gaps such as the toxicokinetics of environmental contaminants across pregnancy and lactation as well as their effects, if any, on sensitive markers of human reproduction and development. Urine and blood continue to lead the way as the most popular biospecimens obtained in epidemiologic studies. This is mainly because they are both accessible and informative for a large number of clinically important parameters. The methodology for collecting, storing, and transporting these types of sample is also well established for measuring a number of parameters. Urine particularly appears to offer a good way forward, as it is relatively easy to collect from all age groups, and for many analytes the sample can be stored at temperatures ranging from room temperature to -20°C, conditions that can be accommodated in most residential dwellings. However, in terms of home-based collection, blood is somewhat problematic in that realistically it takes visits by trained phlebotomists to obtain samples.
The only benefit this might offer is convenience for study participants, which might increase participation rates and decrease drop-out rates in epidemiologic studies. Although urine and blood have formed the mainstay of biospecimen collection in most field-based epidemiologic studies to date, a number of other accessible biological samples can provide useful and complementary information on a wide range of physiologic indicators and toxicant exposures. Those receiving the most attention include saliva and semen. Studies have shown that these can provide information on hormone levels (saliva) as well as information on testicular function and chemical exposures (semen). These samples are easy to collect and can be stored in the home environment. Samples that are accessible and potentially informative but have received relatively little attention include breast milk, hair follicles, buccal cells, nail clippings, and vaginal swabs. Breast milk appears to be an obvious biospecimen for analysis in studies such as the National Children's Study that focus on child health. It can be used to monitor body burdens in reproductive-age women and to estimate in utero and nursing-infant exposures. However, the range of biochemical targets that can be robustly measured in milk, the sensitivity of milk-based assays, and the effects of storage on such sensitivity have only recently started to be properly assessed. This situation is much the same for hair follicles and nail clippings, whose main use may be to provide nucleic acid for gene expression profiling and gene polymorphism analysis, respectively. It appears unlikely that nail clippings will displace buccal cells as convenient sources of DNA for sequencing and polymorphism analysis in studies of older children and adults, as a substantial body of literature suggests that buccal cells can be conveniently collected, stored, and transported from the home environment and provide high yields of good-quality DNA. However, for studies of infants and young children, the nail-clipping method may be an ideal alternative. Though home collection of biospecimens appears to offer advantages over clinic-based studies in terms of participation rates and reduced costs in certain circumstances, definitive assessments of cost-benefit compared with clinic-based studies are needed to verify this assumption. Also needed are formal assessments of collection, storage, and other quality control issues associated with home collection, particularly for the less-studied specimens such as nails, buccal cells, or vaginal swabs. When suitable operating procedures have been defined, the integration of a wider range of biospecimens than has been the case thus far has the potential to enhance the robustness of epidemiologic studies by helping to characterize the causes of adverse outcomes and facilitate new approaches to identifying biomarkers of exposure, effect, and disease development.
A Questionnaire Survey of the Type of Support Required by Yogo Teachers to Effectively Manage Students Suspected of Having an Eating Disorder
Background: Many studies have focused on the decreasing age of onset of eating disorders (EDs). Because school-age children with EDs are likely to suffer worse physical effects than adults, early detection and appropriate support are important. The cooperation of Yogo teachers is essential in helping these students to find appropriate care. To assist Yogo teachers, it is helpful to clarify the encounter rates (the proportion of Yogo teachers who have encountered ED students) and the kinds of requested support (the support Yogo teachers felt necessary to support ED students). There are no studies that have surveyed the prevalence rates of ED children by ED type as defined by the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), nor were we able to find any quantitative study surveying the kinds of support Yogo teachers feel helpful to support ED students.
Methods: A questionnaire survey was completed by 655 Yogo teachers working at elementary/junior high/senior high/special needs schools in Chiba Prefecture. The questionnaire asked if the respondents had encountered students with each of the ED types described in DSM-5 (anorexia nervosa (AN), bulimia nervosa (BN), binge eating disorder (BED), avoidant/restrictive food intake disorder (ARFID), and other types of EDs (Others)), and the kinds of support they felt necessary to support these students. The encounter rates and the kinds of requested support were obtained and compared, taking their confidence intervals into consideration.
Results: The encounter rates for AN, BN, BED, ARFID, and Others were 48.4, 14.0, 8.4, 10.7, and 4.6 %, respectively. When classified by school type, AN, BN, BED, and ARFID had their highest encounter rates in senior high schools. Special needs schools had the highest rate for Others. The support most required for all ED types was "a list of medical/consultation institutions."
Conclusions: Our results have clarified how to support Yogo teachers in the early detection and support of ED students. We found that the encounter rate of AN was the highest, and that it is effective to offer "a list of medical/consultation institutions" to junior and senior high schools, where the encounter rates for AN are high.
Background
Eating disorders (EDs) can be thought of as abnormalities in eating behavior formed by close interactions among psychological, physical, and mental factors [1][2][3][4]. Among EDs, anorexia nervosa (AN) is usually accompanied by a sudden loss of weight, as well as other kinds of mental diseases and behavioral disorders, and it can be chronic and severe [5][6][7][8]. Its mortality rate is extremely high, at 7 % in Japan. With respect to age, AN prevails among teens, while bulimia nervosa (BN) is more common among people in their twenties, and the proportion of teens among the estimated onset ages of EDs is increasing yearly. There have been many studies focusing on the lowering age of onset of EDs outside Japan [9][10][11][12][13]. Some have examined the prevalence rates of early onset EDs (EOEDs) in children aged from 5 to 13 [14], the prevalence rates by ED type in students aged 10 and up [15], lifelong prevalence rates and onset ages by ED type [16], and early signs of ED symptoms in nine-year-olds [17].
However, as epidemiological surveys of medical institutions could undercount these prevalence rates, actual condition surveys of schools and communities are needed. Thus, there have been a number of recent surveys of Yogo teachers in Japan [18][19][20][21]. Yogo teachers are unique to Japan. They measure height and weight and review medical records for each student. They not only take care of students like school nurses, but are also in charge of health education. They stay at school on weekdays, teach courses on adolescent health, and have contact with parents. They watch both the mental and physical health of students through body measurements, daily health observation, and information from other teachers or students. They are not nurses, and only some of them hold nursing licenses [18]. They play the role of gatekeepers, screening out students that may have diseases and referring them to school physicians. Many of these surveys have also noted the lowering age of onset of EDs: one study reports a third grader with AN, and another found that AN prevails among younger students than previously thought. However, there have been few epidemiological surveys to date on the actual condition of EDs in elementary schools. Students with EDs are likely to suffer worse physical effects than adults. Therefore, it is crucial to identify and support ED students as early as possible, beginning even at the level of elementary school. Although school physicians, in compliance with Japanese law, perform medical check-ups and recommend that students suspected of having EDs consult medical institutions, ED students (especially those with AN) strongly resist seeking treatment [22]. This suggests that it is crucial to obtain help from Yogo teachers in leading ED students to consultation. In that respect, Yogo teachers are in a very advantageous position for the early detection and support of ED students, since they regularly measure all students' bodies and oversee the daily health of the whole school. One previous study highlighted that general members of the school staff are in the best position for identifying early signs of EDs and supporting ED students [23]. In order to improve the present situation, in which one half to one third of AN students do not visit a medical institution [18], it is necessary for Yogo teachers to be able to offer effective methods of support to ED students. However, Yogo teachers generally have a heavy workload, as they not only take care of the mental and physical health of the students, but also teach courses on adolescent health and liaise with parents. This leaves little time to support ED students, and support for the Yogo teachers themselves is needed. To examine this situation, we surveyed encounter rates and needs. The "encounter rate" is defined as the proportion of Yogo teachers, out of all Yogo teachers, who have encountered ED students. By surveying this rate by school and ED type, effective support can be appropriately implemented (e.g., extra support for school types with a high encounter rate). We used the encounter rate instead of a prevalence rate, since what we wanted was not the "type of EDs diagnosed by Yogo teachers" but the "type of EDs suspected by Yogo teachers." Moreover, the prevalence rate not only tends to appear lower than the actual value, but is also hard to obtain (as it is difficult for medical institutions to gain access to schools in Japan).
There have been many surveys of prevalence rates of EDs in students [15,16,[18][19][20]], but to the best of our knowledge, there have been no surveys to date on encounter rates among Yogo teachers. Moreover, previous studies on prevalence rates were based on DSM-IV diagnostic criteria, and rates by ED type based on the newer criteria described in DSM-5 have not yet been surveyed. The ED types described in DSM-5 are AN, BN, binge eating disorder (BED), avoidant/restrictive food intake disorder (ARFID), and other EDs such as rumination disorder or pica (Others). The DSM diagnostic criteria of the American Psychiatric Association were revised in 2013, and "Eating Disorders" and "Feeding and Eating Disorders in Infancy or Early Childhood" in DSM-IV were integrated into a new diagnostic category, "Feeding and Eating Disorders." This reflects the recognition that disorders previously believed to emerge mainly in infancy or early childhood actually emerge at later ages as well, so there was no longer a need to separate these age groups. However, the prevalence rates of EDs are reported to be higher in DSM-5 than in DSM-IV-TR [24,25] because DSM-5 has increased the number of symptom names and offers more flexible diagnostic criteria [26][27][28][29][30]. Furthermore, one study [31] reports that ARFID prevails among younger students than AN or BN. These facts indicate that a new survey by ED type based on DSM-5 is necessary. It is also necessary to clarify the needs of Yogo teachers; i.e., what kind of support they require to support ED students. However, while there have been qualitative studies that survey their needs, to the best of our knowledge, there has not yet been a quantitative one. Therefore, the objective of the present study was to gather fundamental data that would be effective in supporting Yogo teachers in their early detection and support of ED students. More specifically, we carried out questionnaire surveys of Yogo teachers at several different levels of school to determine the prevalence of each DSM-5 ED type, and clarified the proportion of Yogo teachers who had encountered ED students (encounter rate) and the kinds of support they needed to effectively support ED students (requested support).
Sample selection
The subjects were Yogo teachers working at elementary, junior high, senior high, and special needs schools in Chiba Prefecture, Japan. Chiba Prefecture is located in the national capital region and contains Chiba City (one of Japan's ordinance-designated cities, with a population of more than half a million) and Narita International Airport. Its population of 6.2 million ranks 6th among Japan's 47 prefectures.
Ethics statement
The study was approved by the Ethics Committee of Chiba University. We explained our research to the educational committees of Chiba Prefecture and its cities, school principals, the head of the Yogo teacher association, and the Yogo teachers through both written and oral descriptions. All participants gave their written informed consent. The questionnaire was anonymous and self-completed, and included the statement that all responses were entirely voluntary.
Study procedures
The questionnaire asked about the demographic characteristics of the Yogo teachers, the features of their schools, their rates of encounter of ED students, and their needs for support for those students.
It also included an explanation of the DSM-5 categories and criteria, and the opportunity for the respondent to freely write his or her impressions of the survey, requests, etc. The question items were created by the Early Finding Working Group Committee. The demographic items included age, gender, nursing experience, years of experience as a Yogo teacher, and school type (elementary, junior high, senior high, or special needs). School feature items included gender type (boys', girls', or coeducational school), number of students, and location of the school. Encounter rate questions appeared in a yes/no format and asked if the respondent had encountered ED students, by ED type. Yogo teachers were also asked how necessary they considered eight different kinds of support (very necessary, rather necessary, not very necessary, not necessary). The questionnaires were distributed to 1,272 Yogo teachers of the specified types of schools in Chiba Prefecture during a Chiba Prefecture Yogo teacher seminar held in January of 2015, and were completed and collected on the spot (the 1,272 covered 78.8 % of 1,615, the total number of Yogo teachers in Chiba Prefecture). We then carried out the following analyses: (1) calculation of encounter rates for all ED types by dividing the number of Yogo teachers who had encountered each type of ED student by the total number of Yogo teachers who completed the survey (note that the encounter rate is not a prevalence rate, but the proportion of Yogo teachers who had experience of taking care of or meeting students with EDs); (2) listing of the kinds of requested support for all ED types in descending order of the number of Yogo teachers answering "very necessary" in each category. Note that the total number of Yogo teachers used in these calculations included respondents with missing values. The statistical analysis software SPSS Ver. 21.0 (IBM, Tokyo, Japan) was used for all analyses.
Demographics of the participants
From the 1,272 Yogo teachers to whom questionnaires were distributed, 655 responses were obtained (effective response rate 51.5 %). The appropriateness of this sample size was examined using the 95 % confidence interval (CI) of the encounter rate, which can be expressed as [p' - 1.96√(pq/n), p' + 1.96√(pq/n)], where n is the sample size, p' is the sample proportion, p is the population proportion, and q = 1 - p. When the half-width of the interval is required to be within 5 %, the worst case p = q = 0.5 gives a minimum required sample size of 385 (1.96√(pq/n) ≤ 1.96√(0.5 × 0.5/n) < 0.05, so n > {1.96 × √(0.5 × 0.5)/0.05}² = 384.16). Furthermore, previous studies on Yogo teachers had sample sizes of 150 [20] and 391 [21]; thus, the present sample size of 655 was judged to be sufficient.
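The sample-size check and the confidence intervals referred to above can be reproduced with a short calculation. The following sketch is illustrative only and is not the SPSS analysis used in the study; the normal-approximation CI is assumed, and the 48.4 % AN encounter rate with n = 655 is taken from the results reported below.

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96):
    """Normal-approximation 95 % confidence interval for a proportion."""
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

def min_sample_size(half_width: float = 0.05, z: float = 1.96, p: float = 0.5) -> int:
    """Smallest n keeping the CI half-width below the target, using the worst case p = 0.5."""
    return math.ceil((z * math.sqrt(p * (1.0 - p)) / half_width) ** 2)

def cis_overlap(p1: float, n1: int, p2: float, n2: int) -> bool:
    """Crude check of whether the CIs of two reported proportions overlap,
    as used when deciding that a ranking of support items is uncertain."""
    lo1, hi1 = proportion_ci(p1, n1)
    lo2, hi2 = proportion_ci(p2, n2)
    return lo1 <= hi2 and lo2 <= hi1

print(min_sample_size())          # 385, matching the value quoted in the text
print(proportion_ci(0.484, 655))  # CI for the 48.4 % AN encounter rate
```

Overlapping intervals, as reported below for the "Others" category and for the special needs schools, simply mean that the ordering of the items cannot be asserted with confidence.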
The demographic characteristics of the responding Yogo teachers are shown in Table 1. All were female, and 62.0 % were in their forties or older. 93.6 % lacked nursing experience, and 51.2 % had worked as a Yogo teacher for at least 20 years. 59.4 % worked at an elementary school, 28.9 % at a junior high school, 9.3 % at a senior high school, and 2.4 % at a special needs school. The features of the schools are shown in Table 2. All schools were coeducational. Most elementary and junior high schools had 201~400 students and were located in local core cities. Most senior high schools had 801~1,000 students and were located in suburbs. Table 4 shows the encounter rates by ED and school type. For AN, the encounter rate was highest in senior high school at 80.3 %, and junior high school came next at 67.7 %. For BN, senior high school was highest at 50.8 %. For BED, senior high school was again highest at 27.9 %, followed by special needs school and junior high school at 12.5 and 11.1 %, respectively. For ARFID, senior high school was highest at 14.8 %, followed by junior high school, elementary school, and special needs school at 11.1, 10.0, and 6.3 %, respectively. For Others, special needs school was highest at 31.3 %. Table 5 shows the requested support in descending order of the number of Yogo teachers who had encountered ED students (by ED type) and felt that the listed support was "very necessary." The type of support considered most necessary was "a list of medical/consultation institutions" for all ED types, at 66.7~85.5 %. For Others, note that "a list of medical/consultation institutions" may change places with "education of teachers" or "advice from medical/consultation institutions," as their CIs overlapped. Table 6 shows the requested support by school type. The support most requested by Yogo teachers of elementary, junior high, and senior high schools was "a list of medical/consultation institutions," at 73.5, 82.4, and 82.2 %, respectively. The support most requested by special needs schools was "education of teachers" at 64.7 %, followed by "a list of medical/consultation institutions" and "advice from medical/consultation institutions" at 58.8 and 58.8 %, respectively. Note that this order may be reversed, as the CIs overlapped.
Free description
A free-description section contained some valuable comments regarding tools or information for screening ED students. A number of Yogo teachers reported using growth curves, or information from other teachers, students, and parents (such as reports of students seen vomiting), for that purpose. Regarding the screening of AN students, Yogo teachers reported comments such as "I used growth curves" and "I wish there was a method of finding EDs from growth curves promptly." Regarding BN, Yogo teachers reported "I saw some students vomit after lunch" and "I heard information about students' vomiting from other students or teachers." Regarding ARFID, comments included "I was asked to respond to students avoiding taking lunch" and "ARFID is more common among boys."
Discussion
The objective of the present study was to gather fundamental data to clarify what types of support might be effective in aiding Yogo teachers with the early detection and support of ED students. Specifically, we conducted a questionnaire survey of Yogo teachers working at elementary, junior high, senior high, and special needs schools, and clarified the proportion of Yogo teachers who had encountered ED students (encounter rate) and the kinds of support they felt they needed to support those students (requested support). EDs were divided into five categories based on DSM-5 criteria. The ED type with the highest encounter rate was AN. While some previous studies on students outside Japan have reported that the prevalence of Eating Disorder Not Otherwise Specified (EDNOS) was higher than that of AN [15,16], other research on Yogo teachers has found that AN is more common than BN in Japan. This difference may be explained by the fact that Japanese law requires Yogo teachers to draw growth curves, which enables them to detect AN easily, as described in the free-description section. BMI is frequently used in surveys of EDs [32][33][34][35][36], but is not common in Japan: Yogo teachers are busy not only taking care of students but also teaching classes, and do not have much time to calculate BMIs. Instead, they prefer to use growth curves.
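The free-description comments suggest that growth curves are the de facto screening tool, and one respondent wished for "a method of finding EDs from growth curves promptly." As a purely hypothetical illustration of how such a check could be automated from the body measurements Yogo teachers already record, the rule below flags a fall of more than a chosen fraction below a student's own previous peak weight; the rule, the function name, and the 10 % threshold are illustrative assumptions, not a validated clinical criterion and not part of this survey.

```python
from typing import List, Tuple

def flag_weight_loss(measurements: List[Tuple[str, float]], drop_fraction: float = 0.10) -> bool:
    """Flag a student if the latest weight is more than `drop_fraction` below
    the maximum of their own earlier measurements.

    measurements: chronological (date, weight_kg) pairs from routine school
    body measurements. The 10 % default is an arbitrary illustrative threshold.
    """
    if len(measurements) < 2:
        return False
    weights = [w for _, w in measurements]
    previous_peak = max(weights[:-1])
    latest = weights[-1]
    return latest < previous_peak * (1.0 - drop_fraction)

# Example: a student whose weight fell from 52 kg to 45 kg would be flagged.
print(flag_weight_loss([("2014-04", 50.0), ("2014-10", 52.0), ("2015-01", 45.0)]))  # True
```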
The present results show that as many as half (48.4 %) of Yogo teachers have encountered AN students. Thus, it may be effective to assist Yogo teachers with support intended primarily for students with AN. England has General Practitioners (GPs) and Child and Adolescent Mental Health Services (CAMHS) outside school, as well as guidelines and a flowchart to help ED students find appropriate medical institutions from within school [37]. In contrast, Japan has no concrete support for ED students in school. The present study revealed that the most necessary support for Yogo teachers who had encountered students with AN was "a list of medical/consultation institutions." Moreover, a previous study that asked Yogo teachers what they wanted to know about EDs reported that "a way of cooperating with medical institutions" was highly needed. These facts indicate that it is necessary to offer Yogo teachers, especially those dealing with AN, a medical/consultation institution list, which would connect schools and medical institutions. Next, the encounter rates of AN were compared by school type in order to examine which types of school should be supported as a first priority. A questionnaire survey of Yogo teachers based on DSM-IV reported that the numbers of students reported as "too thin" and "eat and vomit" in senior high school were higher than those in junior high school. The present results also showed that the encounter rate in senior high school was higher than that in junior high school. These facts indicate that it may be effective to offer "a list of medical/consultation institutions" to senior high schools when supporting AN students. BN had the second highest encounter rate, at 14.0 %. This may be attributed to the fact that Yogo teachers often saw students vomit, or obtained information about vomiting students from other students or teachers, as mentioned in the free-description section. Regarding the support needed, Yogo teachers again reported that the most necessary was "a list of medical/consultation institutions." The encounter rate for BN was highest in senior high schools. Similarly, the support most requested by Yogo teachers who had encountered students with ARFID was "a list of medical/consultation institutions." The ARFID encounter rates for all school types were at a similar level. One previous report found that ARFID is "a phenomenon seen not only in infants less than seven years old or in early childhood, but also in a wider range of ages such as school age or prepuberty" [31]. In addition, comments in the free-description section revealed that Yogo teachers frequently saw ARFID students avoid taking lunch in school, and were asked by parents or teachers to deal with these students' likes and dislikes of food. These facts imply that it is necessary to support ARFID from elementary school age, and to assist Yogo teachers with a list of medical/consultation institutions and support for handling food dislikes. It is essential to offer ED support, as it is considered to be highly needed by Yogo teachers at each type of school, and the type of support most overwhelmingly requested by Yogo teachers who had encountered students with EDs was "a list of medical/consultation institutions." A previous study states that the lowering age of onset of EDs and their prolonged course are major problems. We therefore believe that it is important to offer "a list of medical/consultation institutions" at the elementary school level, and to continue to provide support to ED students throughout their school years.
The types of support required by Yogo teachers of special needs schools who had encountered EDs were "education of teachers," "a list of medical/consultation institutions," and "advice from medical/consultation institutions." Special needs schools are specialized schools that accept students with developmental disorders. Some studies have noted that students with developmental disorders tend to develop EDs [38,39]. Thus, it may be useful to offer the support requested by Yogo teachers of special needs schools. Lastly, we discuss the validity of using the encounter rate. We obtained the proportion of Yogo teachers who had encountered EDs, and not the actual proportion of ED students (the prevalence). In some cases, Yogo teachers may not be able to detect EDs or differentiate ED types, either because they are not familiar with them or because ED students avoid visiting Yogo teachers. This suggests that using the prevalence rate might be more appropriate. However, the prevalence rate is not without flaws. It is hard to obtain, since it is difficult for medical institutions to gain access to schools in Japan. School physicians may do so, but they seldom go to school and their surveys may not be timely. In comparison, Yogo teachers are at school on a daily basis, evaluating the health condition of students in real time. Moreover, as our purpose was to clarify the needs of Yogo teachers who deal with EDs, what we needed was not necessarily "an accurate proportion of ED students diagnosed by doctors"; "a proportion of ED students suspected by Yogo teachers" was enough for this purpose. Therefore, we concluded that using the encounter rate was valid in this survey.
Limitations and recommendations
One of the limitations of the present study is that the sample might be biased, as all participants were selected from a single prefecture, Chiba Prefecture. In addition, the sample sizes for the special needs schools and for Others were small, which made the CIs wide and the estimation errors large. By increasing the sample size and selecting participants from various prefectures, it will be possible to compare the encounter and prevalence rates of multiple prefectures. We believe that this would lead to better support for Yogo teachers in finding and supporting ED students. Moreover, as we did not ask Yogo teachers to calculate BMIs or draw growth charts for screening ED students, our results may be regarded as lacking in objectivity. However, as our purpose was not to investigate the objective prevalence but to clarify the needs of Yogo teachers in supporting EDs, asking them to use such tools seemed unnecessary.
Conclusion
The present study was a questionnaire survey of Yogo teachers at elementary, junior high, senior high, and special needs schools in Chiba Prefecture, Japan. We calculated the encounter rates and the requested support by school and by ED type. Our results showed that the order of encounter rates was AN > BN > ARFID, and that it might be effective to offer a "medical/consultation institution list" for the early detection and support of ED students for all ED types. By individual ED type, it was found that it might be effective to offer support to Yogo teachers in senior and junior high schools for AN, in senior high schools for BN, and in all school types for ARFID.
Gas-jet synthesis of diamond
The nonequilibrium processes in flows of gas mixtures on their way to the surface where diamond structures form are discussed. The main attention is focused on processes with thermal activation. Thermocatalytic phenomena in the collisions of hydrogen and methane molecules with tungsten, nonequilibrium processes during the transport of active components in channels and on the way to the substrate, and the formation of the gas atmosphere immediately adjacent to the surface of diamond formation all belong to the field of modern physical mechanics. The same statement applies to diamond synthesis from microwave plasma, which is governed by the generation of plasma from high-frequency radiation and, in most cases, by diffusive interaction of the plasma with the deposition surface. The main content of the studies at the Institute of Thermophysics belongs to a new direction of research: the synthesis of diamond from a high-velocity flow of gas mixtures or plasma.
Introduction
Diamond possesses extreme properties of hardness, thermal conductivity, low friction, electric resistance, optical transparency, hole conductivity, chemical inertness, and biological compatibility. The possibility of gas-phase synthesis of diamond in film form has significantly extended diamond application for surface modification, including complex-shaped surfaces. Approximately 97% of all diamonds used in industry all over the world are synthetic diamonds. Gas-phase methods of mono-, poly-, micro-, nano-, and ultrananocrystalline diamond synthesis are being vigorously developed. The priority in the development of artificial diamond synthesis methods undoubtedly belongs to Russian scientists. Back in 1939, the journal "Uspekhi Khimii" (Achievements in Chemistry) published a paper by O.I. Leipunskii, which contained a phase diagram of carbon that predicted the possibility of diamond synthesis at high pressures and high temperatures. Diamond deposition from the gas (vapor) state allows synthesis of diamond films with thicknesses varying from nanometers to the thickness of structural parts. The properties of the synthesized product can be widely varied depending on the gases used and the specific features of the technologies. This variety is caused by the difference in the methods of activation of precursor gases: activation on the surface of a hot wire, on an extended hot surface, in the plasma of a microwave discharge, in the electric arc plasma, in plasmas of various discharges, in detonation shock waves, and in the flame. Besides, carbon atoms can be delivered in different carrier gases. The early stages of development of gas-phase methods revealed an important role of atomic hydrogen as an active agent in preparing carbon bonds for constructing the diamond structure. An example of the chain of modification of carbon bonds with the participation of H and CH3 fragments (figure 1) was published in [1] for the case of the chemical process of diamond formation on the (110) face. The formation of a new C-C bond in six steps can be clearly seen. This scheme turned out to be popular, resulting in recognition of the leading role of the methyl group CH3 as a carbon-containing fragment. In reality, the channels of formation of diamond structures are versatile; one of the reasons is the specific surface chemistry of each of the typical faces (100, 110, and 111). As follows from numerous modern investigations, diamond synthesis can involve C, CH3, C2H, C2, and other species as active fragments.
Synthesis without hydrogen participation is also possible. Figure 2 shows the relative positions of the free energy levels for graphite and diamond. Under standard conditions, graphite is in the stable state, while diamond is a metastable material whose free energy differs from that of graphite by 2.9 kJ/mol. Diamond growth at low pressures looks like a paradox (a violation of the second law of thermodynamics). The question is how metastable phase formation at standard temperature and pressure arises. This can be understood by considering the phase diagram of carbon (figure 3). The domains G and D correspond to the stable equilibrium states of graphite and diamond, respectively. At low background pressures, the high internal pressure induced by surface tension creates conditions for nucleation of diamond structures in domain D. The curve in the inset (pressure as a function of the cluster radius) shows that this pressure can reach several gigapascals. Consequently, even at low ambient pressures, a transition across the boundary of domains G and D may occur in the cluster. The co-existence of different phases blurs the "equilibrium" boundary. Is it possible to resolve the conflict with thermodynamic laws for the various ways of diamond structure formation? Indeed, in the case of deposition from the gas phase with activation on hot surfaces, in the microwave plasma, in the electric arc plasma, and in the flame, conditions for diamond synthesis can be formed due to the existence of a cluster structure, as discussed in the description of figure 3. A mechanistic interpretation of the chemical kinetics should have an energy-based justification. Diamond formation during detonation can be attributed to conditions of high pressure and temperature at microscales.
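To make the cluster-pressure argument above more concrete, the internal overpressure of a small cluster can be estimated from the Young-Laplace relation p = 2γ/r. The sketch below is only an order-of-magnitude illustration: the surface energy of 3 J/m² and the 2 nm radius are assumed representative values, not numbers taken from figure 3.

```python
def laplace_pressure(surface_energy_j_m2: float, radius_m: float) -> float:
    """Internal (capillary) overpressure of a spherical cluster, p = 2*gamma/r."""
    return 2.0 * surface_energy_j_m2 / radius_m

gamma = 3.0     # J/m^2, assumed order-of-magnitude surface energy for a carbon cluster
radius = 2e-9   # m, assumed cluster radius of 2 nm
print(f"{laplace_pressure(gamma, radius) / 1e9:.1f} GPa")  # ~3 GPa, i.e. several gigapascals
```

An overpressure of this magnitude is enough to shift a nanometer-sized cluster across the G-D boundary of the phase diagram even when the ambient pressure is low, which is the point made above.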
Thermal activation and interaction of CxHy, H, and H2 fragments with surfaces of forming diamond structures
The possibilities of activation of the most popular mixture, H2 + CH4, on hot surfaces were considered in much detail in [2]. The boundary conditions responsible for the flow of atomic hydrogen from the surface to the gas have a rather complicated nature, which has not been adequately studied yet. The associated processes are dissociation of hydrogen molecules due to their collisions with the surface, hydrogen adsorption with subsequent decomposition of molecules and desorption of atomic fragments, collision of atoms with the surface and subsequent reflection after accommodation or adsorption, recombination of atoms of various origin, dissolution of hydrogen in metals, and, finally, excitation of the internal energy of hydrogen molecules during their collisions with the surface. The description of hydrogen-tungsten interaction over a wide range of temperatures is complicated by a strong dependence on the surface state, in particular, on the type of the crystal face with which the particle collides and on the state of adsorbed particles, which, in turn, depends on temperature and pressure. The problem is somewhat less difficult for high temperatures (above 2000 °C) and high vacuum: in this case, the surfaces are cleaner. It should be noted that the boundaries between different surface states are rather blurred. Apparently, the first significant contribution to studying hydrogen-tungsten interaction was made by Langmuir [3,4]. He established the existence of a chemisorption process of hydrogen decomposition on the hot tungsten surface and derived the adsorption law in the form of an adsorption isotherm, which was named after him. The tungsten activity during interaction with H2 + CH4 and H2 + C2H2 mixtures was studied in [5]; it was shown that the tungsten surface at low pressures (within 1200 Pa) in the 1.5% C2H2 + H2 mixture is carbonized, up to temperatures of about 2600 K, to such an extent that the emissivity reaches a value of approximately 0.85. At temperatures above 2600 K, it decreases to 0.5, i.e., to the level of emissivity of the pure tungsten surface. Fundamental results on hydrogen interaction with the tungsten surface were obtained by molecular beam measurements in [6]. The degree of hydrogen dissociation was determined for the case of molecule collisions with the surface at temperatures from 1800 to 3000 K. The measured accommodation coefficients reported in publications are usually given for low temperatures. Therefore, their application to intensely heated and cleaned surfaces is not justified. The data on the accommodation coefficients of hydrogen molecules and atoms colliding with the tungsten surface are rather contradictory [7]. The reason is a strong dependence of the accommodation coefficients on the adsorption coating. Interaction of CH4 molecules with the tungsten surface leads to formation of carbides, carbon dissolution in the volume, adsorption of atomic hydrogen, and its desorption in the atomic or molecular form. There are practically no data on the translational, rotational, and vibrational energies of reflected CH4 molecules. At high temperatures (~2200 K and higher), adsorbed molecules are fragmented into C and H with such a short lifetime on the surface that desorption can be considered an instantaneous process. There is also little information about accommodation of reflected molecules at high temperatures. Thus, in the range of high temperatures of the activating surface, the flow of CH4 molecules past an extended surface transforms into a flow of excited CH4 molecules together with C and H fragments. However, the sticking coefficient of CH4 to the tungsten surface is very small [2,8]. Therefore, one has to conclude that active fragments for diamond synthesis are formed either in the flow of molecules to the substrate or during interaction with the substrate. The low sticking coefficient of CH4 to the surface almost excludes the channel of direct formation of the methyl group CH3 due to CH4 collisions with the diamond surface. The flow analysis of experiments with the thermal plasma definitely reveals a large set of fragments, besides the methyl group, participating in diamond synthesis. For example, the measured compositions of the flow in the plasma experiments [9] turned out to contain many C, C2, and CH fragments, which are active agents of diamond synthesis. Flows of individual fragments reaching the substrate surface where diamond synthesis occurs depend not only on the composition of gases incoming from the reactor (or the source of precursor gases), but also on the processes of particle interaction with the substrate surface. The latter are determined by the translational velocity of the particles, their internal energy, the substrate temperature, blocking of carbon sites by hydrogen compounds, adsorption of particles, their accommodation, the nature of the crystal face, and crystal lattice defects. These factors are interrelated.
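The flux of each fragment actually arriving at the substrate can be estimated to first order from kinetic theory, Φ = n·v/4 with mean thermal speed v = √(8kBT/πm), provided the gas near the surface is close to Maxwellian. The sketch below uses assumed number-density and temperature values purely for illustration; it is not data from the cited experiments.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
AMU = 1.66053907e-27  # atomic mass unit, kg

def wall_flux(number_density_m3: float, temperature_k: float, mass_amu: float) -> float:
    """Kinetic-theory impingement flux on a surface, Phi = n * v_mean / 4 (m^-2 s^-1)."""
    v_mean = math.sqrt(8.0 * K_B * temperature_k / (math.pi * mass_amu * AMU))
    return 0.25 * number_density_m3 * v_mean

# Assumed illustrative conditions: atomic hydrogen at n = 1e21 m^-3 near a 1300 K substrate.
print(f"{wall_flux(1e21, 1300.0, 1.008):.2e} H atoms m^-2 s^-1")
```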
In estimating the atmosphere composition, one should bear in mind that the fraction of deposited atoms in the formed structure is negligibly small compared to the initial flow. For this reason, the computational analysis of the gas composition by the direct simulation Monte Carlo (DSMC) method [10] is extremely valuable, even if it involves approximate estimates for the adsorption and recombination probabilities and the accommodation coefficients. A certain optimal precursor, or an optimal set of precursor fragments, corresponds to each method and, moreover, to each particular combination of diamond synthesis conditions. In this respect, data on the sticking coefficients of various hydrocarbon molecules to the diamond faces (100) and (111), calculated by the molecular dynamics method in [11], are very interesting. The sticking coefficients for a gas temperature of 2120 K and surface temperatures from 800 to 1100 K are listed in table 1. It is of interest that the sticking coefficient for molecules with one and two carbon atoms increases from zero to unity as the number of hydrogen atoms decreases. The minimum sticking coefficient, observed for methane, is explained by the absence of free electrons for bonding. The same factor is responsible for the low value for ethylene. These results emphasize the importance of the presence of fragments interacting with diamond in the flow: C, CH, CH2, CH3, C2, C2H, C2H2, C2H4, and C2H5. Von Keudell et al. [12] reported the results of a well-arranged experiment aimed at studying the interaction of methyl radicals and atomic hydrogen with an amorphous hydrogenated carbon film, where they found that the presence of a sufficiently intense flow of atomic hydrogen increases the sticking coefficient of the CH3 molecule by approximately two orders of magnitude (!!), from 10^-4 to 10^-2. This occurs because atomic hydrogen opens sites on carbon atoms on the surface for deposition of CH3 molecules. The synergy effect of combined injection of CH3 and H, which was detected qualitatively and quantitatively, demonstrates the importance of the mutual influence of individual precursor fragments in the mixture on the deposition of carbon structures. It can be concluded that caution is necessary in forming the mixture of precursor gases transported to the substrate. In molecular dynamics calculations, the characteristic time of particle collision with the surface is taken to be of the order of 1 ps. The time step is from 10^-4 down to 10^-6 ps. Heterogeneous reactions of hydrogen atoms and CH3 radicals on the diamond surface were studied in [13] in the diamond temperature range from 300 to 1100 K at pressures in the interval 130 < p < 260 Pa. As the diamond temperature increases from 500 to 1100 K, the probability of atomic hydrogen adsorption increases from 6×10^-3 to 4×10^-1, and that for CH3 increases from 5×10^-5 to 5×10^-2.
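If one assumes a simple Arrhenius form P ∝ exp(−Ea/kBT) for the adsorption probabilities quoted above, an apparent activation energy can be backed out from the two endpoints of the 500-1100 K range. This is an illustrative estimate under that assumption, not a result reported in [13].

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

def apparent_activation_energy(p1: float, t1: float, p2: float, t2: float) -> float:
    """Apparent activation energy (eV) from two (probability, temperature) points,
    assuming P = A * exp(-Ea / (kB * T))."""
    return K_B_EV * math.log(p2 / p1) / (1.0 / t1 - 1.0 / t2)

# Endpoints quoted in the text for the 500-1100 K diamond temperature range:
print(f"H  : {apparent_activation_energy(6e-3, 500.0, 4e-1, 1100.0):.2f} eV")   # ~0.3 eV
print(f"CH3: {apparent_activation_energy(5e-5, 500.0, 5e-2, 1100.0):.2f} eV")   # ~0.5 eV
```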
A special feature of this approach is the use of heterogeneous dissociation processes that occur during multiple collisions of molecules with a hot surface [18,19]. Research on diamond synthesis with thermal activation faced a serious problem with this activation: graphitization, which blocks the catalytic effect and correspondingly reduces the deposition rate. Investigations of microwave plasma for diamond deposition have recently been started at IT SB RAS. The idea is to generate a high-velocity flow of active plasma according to a scheme similar to that of existing space thrusters based on microwave energy. A jet generated by such a reactor combines the advantages of a plasma-torch jet with those of electrodeless generation of pure plasma. Figure 4 shows a photograph of a supersonic flame of the microwave plasma. The first successful experiments have been performed. A specific feature of the developed method is the use of a high-velocity plasma jet, whereas popular methods of diamond deposition from microwave plasma are based on diffusive transport of active fragments to the deposition surface. The method developed at IT SB RAS can therefore be said to occupy an intermediate position between the traditional methods with activation in the microwave plasma and activation in the electric arc plasma.

Numerical simulation of nonequilibrium processes in gas flows

In arranging experiments and analyzing their results, it is also important to consider the results of gas-dynamic modeling. Formation of diamond and carbon structures is determined by the composition and by the velocity and energy distribution functions of fragments in the gaseous medium near the substrate; these fragments are products of nonequilibrium processes in channels with cylindrical bounding surfaces and on the way from the activation channels to the substrate. Gas-dynamic, physical, and chemical processes in a high-velocity flow of the mixture were studied in [20]. An effective method of computational analysis of the flow is the DSMC method. The computations were performed for the conditions used in experiments [14-17]. The estimates showed that decomposition of the initial gases (CH4 and H2) and their further conversion to the H and C components of the mixture can be treated in the one-dimensional approximation by solving chemical kinetics equations.

Figure 5 shows the computational domain geometry. The CH4 + H2 mixture at an initial temperature of 1500 K is injected into a cylindrical channel with a diameter d = 0.003 m and length h + L. The temperature of the hot section 2 is 2400 K. The mixture expands from the channel into a vacuum chamber (domain 3). A substrate for diamond deposition is located at a distance L_sub = 0.01 m; its temperature is 1300 K. Figure 6 shows the results of model calculations in which methane decomposition, whose gas-dynamic role is insignificant, is ignored. It illustrates the influence of pressure in the deposition chamber on the axial distributions of the parameters of H2, H, and CH4 (densities, velocities, and translational temperatures) for expansion into vacuum and into a medium with an ambient pressure of 20 Torr. The hydrogen flow rate is 25 sccm, and the molar fraction of methane is 3%. Three segments of the flow can be clearly defined: the initial ("cold") region at 0.035-0.045 m, the hot region at 0.045-0.06 m, and the 0.01 m region of expansion toward the substrate.
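A minimal sketch collecting the geometry and flow conditions quoted above into one configuration object; the field names and the dataclass itself are illustrative conveniences for bookkeeping, not inputs of any particular DSMC code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChannelSetup:
    """Geometry and flow conditions quoted in the text for the model calculations.
    Field names are illustrative placeholders, not tied to any DSMC code."""
    channel_diameter_m: float = 0.003        # d
    inlet_temperature_K: float = 1500.0      # initial mixture temperature
    hot_section_temperature_K: float = 2400.0
    substrate_distance_m: float = 0.01       # L_sub, channel exit to substrate
    substrate_temperature_K: float = 1300.0
    h2_flow_rate_sccm: float = 25.0
    ch4_molar_fraction: float = 0.03         # 3% CH4 in H2
    chamber_pressure_torr: Optional[float] = None  # None = expansion into vacuum

# The two cases discussed above: expansion into vacuum and into a 20 Torr ambient.
vacuum_case = ChannelSetup()
ambient_case = ChannelSetup(chamber_pressure_torr=20.0)
print(vacuum_case)
print(ambient_case)
```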
The flows have the following typical features. In both cases (in terms of pressure), hydrogen dissociates rapidly in the cylindrical channel at the beginning of the hot region, followed by an equilibrium flow in the cylindrical channel. Behind the cylindrical channel, the flow is essentially nonequilibrium in the case of expansion into vacuum, whereas at a pressure of 20 Torr it remains in equilibrium up to a small distance from the substrate, where the influence of the boundary processes is strongly manifested. The particular case considered here is far from characterizing all the specific features of the various flows formed during diamond deposition, but it demonstrates the capabilities of the computational method.

Kinetic equations for 11 species were solved in modeling a one-dimensional flow with chemical processes in a mixture of H2, H, and CxHy, following [21,22]. The computations were performed for 13 forward and backward reactions. As a result, it became possible to determine the velocity distribution functions near the deposition surface for all species and, most importantly, for the most probable participants of diamond structure formation: H2, H, CH3, C, CH4, and C2H2. The amount of atomic hydrogen, which is an active participant in the chemical processes in the gas flow and in the synthesis of the diamond structure, is characterized by the degree of dissociation K = 0.5·nH/(nH2 + 0.5·nH), where nH2 and nH are the number densities of molecular and atomic hydrogen. Dissociation depends on the channel geometry and the gas flow parameters. Figure 7 illustrates the degree of dissociation for three pressure values in the expansion channel. The value of K increases rapidly already in the first third of the hot channel and then decreases rapidly near the substrate; the latter is determined by a superposition of the flow, radial diffusion, and diffusion of fragments from the substrate surface. The growth of pressure in the chamber leads to flow deceleration in the channel, enhancement of catalytic processes, and an increase in the degree of dissociation toward the channel exit.

Important results of the flow analysis are the data on methane decomposition; the analysis of methane chemistry is one of the most difficult problems. The results of an evaluation of the real situation, made with some assumptions, are shown in figure 8: axial distributions of the molar fractions of the species H2, H, CH4, CH3, CH2, CH, C, and C2H2 for a hydrogen flow rate of 25 sccm and an initial mixture with 3% CH4; the main fragments are C, CH3, and C2H2. The maximum concentration of acetylene is approximately an order of magnitude smaller than that of atomic carbon. The composition of the gases transported to the deposition surface and their energy state at the instant of collision with the surface determine the rate of formation and the quality of the diamond structure. Gas-dynamic modeling using chemical transformation models and DSMC simulations yields the velocity distribution functions and internal energies of all particles participating in the interaction with the diamond surface. Further intense investigations are required to study the boundary conditions on the formed surface at the atomic-molecular level.
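A minimal helper implementing the degree-of-dissociation definition given above; the input densities in the example are arbitrary illustrative values, not results of the computations described in the text.

```python
def dissociation_degree(n_H, n_H2):
    """Degree of hydrogen dissociation K = 0.5*n_H / (n_H2 + 0.5*n_H), where
    n_H and n_H2 are the number densities of atomic and molecular hydrogen."""
    return 0.5 * n_H / (n_H2 + 0.5 * n_H)

# Arbitrary illustrative densities (m^-3), not values from the computations in
# the text: equal H and H2 number densities give K = 1/3.
print(dissociation_degree(n_H=1.0e21, n_H2=1.0e21))  # 0.333...
```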
Conclusion

Diamond synthesis from the vapor (gas) phase is governed by the combined influence of nonequilibrium processes: generation of active fragments on a hot surface with catalytic properties or in a plasma, their transport under nonequilibrium conditions by diffusion or convection in channels, and the formation of specific nonequilibrium clouds in the vicinity of the surface on which the diamond structure forms. As concerns thermal activation, it follows from [2] that:
- the processes of formation of hydrogen atoms on the tungsten surface (through adsorption of atoms from the gas phase or formation from adsorbed H2 molecules) have not been fully studied;
- there are no data on the sticking coefficients of hydrogen molecules on the clean surfaces of tungsten crystal faces (100), (110), etc., apart from (001);
- data on the interaction of CHx (x ≤ 4) molecules with tungsten are absent;
- data on accommodation coefficients can be used with confidence only for clean surfaces.

Nonequilibrium in the wall layer is inherent in any gas-phase deposition. Further investigations are needed that combine numerical modeling of gas flows by DSMC simulation and molecular dynamics calculations with experiments providing information on the composition of the fragments and their energy state. The problem is complicated by the manifold of conditions determining the physical kinetics of diamond structure formation.

Acknowledgments

The author is grateful to M.Yu. Plotnikov for reading the manuscript and for valuable remarks.
Experiences with out-patient hospital service utilisation among older persons in the Asante Akyem North District, Ghana

Background: Though ageing is not a disease, it has been associated with the occurrence of conditions which require health service utilisation. Ghana's population is characterised by a steady growth in the number of older adults, and previous studies have noted limited levels of utilisation by older persons.

Methods: This study therefore utilised a qualitative approach to explore older persons' experiences regarding out-patient hospital service utilisation in the Asante Akyem North District of Ghana. The aim was to generate findings that will guide future policies. Sixteen semi-structured interviews were conducted and thematic analysis executed. Andersen's Behavioural Model was used as a guiding framework.

Results: Medical condition was noted to characterise the need component of utilisation. Also, perceived effects of ageing, beliefs and past health status predisposed an older person to utilise available services. Beliefs were noted to make an older person utilise either orthodox or herbal services. Despite these, family support (in the form of financial assistance), accessibility (health facility, health professional, medication and information) and health care costs either enabled or prevented an older person from utilising services. Despite the existence of the National Health Insurance Scheme, health care costs are high, and this delayed utilisation or made others avoid the services altogether. The care processes were noted to be cumbersome and to involve long hours; these features were noted to be absent whilst utilising traditional medicine services, and this provides an avenue for further research in assessing patient outcomes associated with traditional medicine usage. These findings might be contributing factors to why other studies identified limited usage of health services among older persons in Ghana.

Conclusion: Though older persons in the district may feel the need to utilise health services on an outpatient basis, the enabling factors (notably finance) appeared to be a driving force behind actual utilisation. Thus, more innovative health care financing strategies are needed to enhance the coverage of health services for older persons in the district.

mothers and it will be a great challenge to re-focus attention on older persons [4]. Within the Ghanaian context, issues relating to older persons gained governmental attention in the late 1980s, when a steady increase in the number of older persons was noted [5]. Thus, inasmuch as Ghana's population appears youthful, the absolute number of older persons is increasing at a steady rate, and this has been projected to increase further [5]. The process of ageing, though not a disease, may be associated with various disease conditions which may warrant the utilisation of health services [6]. In terms of meeting the health needs of older persons, the Ghanaian health system has been described as "weak and resource-constrained" [7]. Also, the current hospital system has been noted to be overly oriented toward acute care, though older persons may be affected by chronic ailments [8]. Consequently, the Ghana Public Health Association (GPHA) has expressed concerns regarding poor utilisation of health care services among older persons and inadequate access to specialised health services [9].
Along similar lines, Tawiah [10] has argued that the greatest challenge of old age is to win the war against degenerative chronic conditions affecting older persons. To this end, the Ghana Statistical Service (GSS) and the Ghana Public Health Association have recommended health care and health policy reforms to meet the needs of older persons [7,9]. In pursuing these reforms, GSS has called for "reintegration of older persons into the process so as to enable them contribute to their own well-being" [7]. This may mean that there is a need to capture their experiences with current health services, as these may serve as a basis for understanding how they feel about current services and what their expectations are in relation to health care service utilisation [11]. It is worth noting that some quantitative studies have produced evidence of limited levels of health service utilisation among older persons in Ghana (e.g., Exavery et al. [12]); however, a description of their experiences whilst utilising health care remains blurred. Thus, it is believed that a thorough description of their experiences can establish an understanding of how well the outpatient hospital services are meeting their needs and how they feel about the services, as these can guide future reforms and policies [11].

The Ghanaian health care and health insurance system

The approach to health care in Ghana represents a mixture of preventive and curative services. Before 1996, health care delivery was the sole responsibility of the Ministry of Health and its services were basically oriented towards curative services. In 1996, however, an act of parliament (Act 525) established the Ghana Health Service, which appeared to be more oriented towards preventive services; though focus currently remains heavily on maternal and child health. Healthcare services are provided by hospitals, health centres, maternity homes and chemical shops. However, the well-resourced hospitals and tertiary health facilities are located mostly in urban places, though most of the older persons are in rural areas. In addition, traditional medical services exist at various locations in the country.

As part of the government's effort to ensure equitable access to health in Ghana, it introduced the National Health Insurance Scheme (NHIS) into the health system. The primary goal of the scheme is to increase affordability and utilisation of drugs and health services in general, and especially among the poor and most vulnerable populations in Ghana. The NHIS is financed from four main sources: a value added tax on goods and services, a portion of social security taxes from formal sector workers, individual premiums, and miscellaneous other funds from investment returns, Parliament, or donors. The 2.5% tax on goods and services, called the National Health Insurance Levy (NHIL), is by far the largest source, comprising about 70% of revenues. Social security taxes account for an additional 23%, premiums for about 5%, and other funds for the remaining 2%. The NHIS covers outpatient services, including diagnostic testing and surgeries such as hernia repair; some in-patient services, including specialist care, most surgeries, and hospital accommodation (general ward); oral health treatments; all maternity care services, including Caesarean deliveries; emergency care; and, finally, all drugs on the centrally established National Health Insurance Authority (NHIA) Medicines List (NHIA webpage).
In order to be enrolled, a Ghanaian citizen is expected to pay a registration fee and a premium (which has to be renewed on a yearly basis). However, persons aged 70 years and above, the core poor and pregnant women are exempted from paying premiums but must pay the registration fee prior to enrolment. Aside from the NHIS, there are various private insurance schemes available to persons in the country.

Theoretical framework

Andersen's Behavioural Model of Health Care Utilisation was developed in the late 1960s to assist in understanding why people use or do not use health care services and to measure equitable access to health care [13]. Health care utilisation is seen as an essential stride towards illness management, prevention of diseases and treatment [14]. Health care utilisation has been defined by Andersen's model as an interplay of predisposing, enabling and need determinants. Thus, an individual's access to and use of health services is a function of these three determinants or characteristics. A simplified version of the model is presented as Fig. 1. Though studies have specified the usefulness of the model, it has been indicated that it offers a better explanation of discretionary health behaviours (outpatient care services) than of non-discretionary health behaviours such as inpatient care [15]. Similarly, the model has been noted not to be sensitive to the diverse cultural and structural barriers in healthcare among minority groups. Thus, Andersen [15] has suggested the need for careful integration of cultural and structural variables into the model so as to enable it to provide explanations regarding health service utilisation among minority groups. Furthermore, the model offers flexibility in understanding health behaviours and can be applied to the current study, which focused on outpatient service utilisation in the hospital [15].

Methods

The aim of this study was to explore and describe the experiences of older persons regarding outpatient service utilisation in the Asante Akyem North District.

Study design

As this study was oriented towards exploring and describing older persons' experiences with health service utilisation, an in-depth descriptive qualitative approach was appropriate [16]. Waldrop et al. [17] have noted that the in-depth qualitative approach is appropriate for providing textual descriptions of older persons' experiences and useful as health service utilisation experiences among older persons appear to have been minimally explored.

Setting

The study was conducted within the Asante Akyem North District of the Republic of Ghana; specifically, at the Agogo Presbyterian Hospital (a quasi-governmental health care facility). The hospital was selected as it is the largest health facility in the district and is utilised by persons of all ages. The services offered include both preventive and curative care. Curative care services are divided into medical, surgical, obstetrics, paediatrics and emergency services. Older persons most often utilise medical and surgical care services on either an outpatient or inpatient basis [7].

Participant recruitment

A purposive sampling approach, which involved specifically recruiting persons aged 50 years and over [16], was utilised in this study. Thus, after obtaining ethical clearance at the University of Southampton and the Agogo Presbyterian Hospital, information was sent to the Out-Patient Department (OPD) of the hospital to inform staff about the study. The researcher was provided a seat at the reception.
After older persons had been attended to by the physician and were about to leave the hospital, they were met at the exit of the OPD reception by the researcher in order to discuss the study and invite them to participate. The discussions were carried out in the "Akan" dialect (a Ghanaian language). The contact numbers of older persons who were interested in partaking in the study were obtained from them. After two days of purposively recruiting sixteen (16) older persons, phone calls were made from the third day and appointments were scheduled with participants at their own convenience and venue. The medical staff were informed that, in case data saturation was not achieved after interviewing the sixteenth person, a follow-up recruitment would take place. However, by the end of the sixteenth interview, data saturation had been reached as no new information was noted [16], and the researcher therefore did not return to the hospital to recruit more participants. Moreover, as the study had to be completed within a stipulated time frame, the researcher proceeded to data analysis.

Data collection

As the study aimed to understand and describe older persons' experiences regarding health service utilisation, the words and non-verbal language cues expressed by participants would be helpful; thus, an interview was appropriate [16]. The interview approach was utilised as it was easier to schedule a meeting with one older person at a time as compared to a focus group discussion, and this allowed participants to express themselves in the presence of only the researcher [18]. Mason [19] has noted that semi-structured interviews provide deeper and more rounded explanations. To this end, an interview guide was developed to obtain data from older persons using a semi-structured approach [20]. Before undertaking the interviews, the guide was piloted among three older persons who were conveniently recruited in the district; the findings from those interviews were not included in the actual study.

The interviews took place at the participants' homes with minimal distraction. Prior to scheduling interviews, an initial discussion had taken place with participants at the point of recruitment, so participants were familiar with the researcher even before data collection commenced. Upon arrival at the interview venue, further explanations were offered regarding the study and participants were allowed to ask questions. Permission was sought to have the interview recorded. As participants spoke about their experiences, facial expressions were noted in a field diary. As the interview continued, intermittent breaks were provided to enable participants to take water. The interviews lasted between 49 and 65 minutes. Each interview commenced with asking participants how they had been faring, followed by obtaining socio-demographic data. Participants were then asked what they perceived as their health needs, followed by how they felt with regard to utilising OPD services in the hospital. Also, participants were asked to describe how well the services were meeting their needs. As the interview proceeded, probes were used to enable in-depth exploration of their experiences. In some instances, an iterative mode of questioning was used to ensure that participants were consistent in their responses [16]. Leading questions and medical jargon were avoided throughout the interview process.
At the end of the interview, participants were thanked and informed that all recordings would be transcribed into English and emerging themes discussed with them (as interviews were conducted in the "Akan" dialect). On a later day, a second interview was held with each participant. Feedback was obtained from each participant, which further shaped the findings as some "Akan" terms were clarified and re-considered in the analysis process.

Data management and analysis

In order to provide an in-depth description of older persons' experiences with outpatient service utilisation, thematic analysis was used to generate themes from the data obtained [16]. Thematic analysis involved discovering, interpreting and reporting patterns and clusters of meaning within the data [16,18]. This required working systematically through the transcribed texts and identifying themes that were progressively integrated into higher-order key themes in relation to the research questions [16]. Transcribed data were entered into MS Word and exported to NVivo version 10. The analysis proceeded with an understanding of the Health Service Utilisation Model, and the findings offered support for the model.

Methodological rigour

Participant validation and prolonged contact with participants helped to ensure that the descriptions represented their experiences.

Ethical considerations

Prior to commencement of the study, ethical clearance was sought and obtained from the Research & Governance Unit of the University of Southampton. Thereafter, the study was registered at the Agogo Presbyterian Hospital, and approval and clearance were obtained at the hospital prior to commencement of the study. After recruitment, participants were provided with two consent forms to either thumbprint or sign.

Socio-demographic characteristics of participants

The process of purposive sampling resulted in recruiting older persons aged 50 years or more for the study. A total of sixteen (16) older persons participated in the study. Details of socio-demographic features are presented in Table 1. The largest age group was 50 to 59 years (n = 6). Most of the older persons in the study were married (n = 9) and Christian (n = 11).

Experiences with health service utilisation

Health service utilisation has been defined by Andersen's behavioural model as an interplay of need, predisposing and enabling determinants. This section discusses the themes associated with each determinant.

Need themes

These determinants specify the reason for seeking healthcare. In this study, medical condition (symptoms and diagnosis) and professional evaluation were identified as determinants that made older persons seek and utilise health care services.

Medical condition

From the analysis, the symptoms associated with a particular condition made an older person seek and utilise health care services. These symptoms appeared to interfere with the older person's usual activities, and as such they decided to seek medical attention. For some participants, utilising hospital services occurred almost immediately, whilst others tried some home remedies before seeking attention at the hospital. The perceived severity of the symptoms was based either on their intensity as experienced by participants or on the consequences of failing to act on initial symptoms.
In some instances, older persons had some knowledge about the complications that might result, or had heard others talk about similar symptoms, and that made them consider using hospital services:

"I had been coughing for three days and I was unable to sleep well but I thought it would pass soon but it did not. I tried lime and honey at home but it just did not work so I went to the hospital. The cough was unbearable and it was disturbing everyone in the room when we slept at night [cups chin in the palm]" (Male, 60-69 years)

"I hardly visited the hospital until I started urinating too often which disturbed me a lot. If I continued like that I knew all the water in me would get finished so I had to go to the hospital and see the doctor for help" (Female, 70-79 years)

Professional evaluation

Subsequently, as older persons felt the need to seek health care, they were evaluated by a health care provider, and that made it necessary for them to continue using health services. In this regard, participants were informed of the need to attend regular reviews by physicians. These took the form of monthly or bi-monthly medical appointments to assess progress made, carry out laboratory tests, and change or refill medications. Thus, being informed by the physician of the need for reviews appeared to have created a need for continuous outpatient visits:

"I used to go the hospital once every month but it has been changed to once every two months. I get my blood pressure drugs for sixty days" (Female, 50-59 years)

"They did so many things and the doctor said I should come there every month for check-up but now, I see my doctor once every two months for him to assess me and see if there is any problem. I get medications too" (Male, 83 years)

Predisposing themes

The themes noted in this section include perceived effects of ageing, beliefs and past health care experiences. Despite the symptoms and conditions associated with advancing age, participants felt the need to maintain good health as they grew older, and that created the likelihood of utilising outpatient hospital services, even though some participants noted that it was difficult to maintain good health, especially as one advanced in age. The difficulty associated with maintaining good health in older age was noted to be associated with the nature of the symptoms, which appeared to be chronic, and that made participants compare their health status in their youthful years with their current status. Despite this, it was identified that the difficulty in maintaining good health by themselves predisposed older persons to utilise health care as they desired complete recovery. This desire served as a predisposing factor that made older persons require continuous contact with health care services.

Beliefs

Aside from perceived effects of ageing, beliefs were also noted to predispose older persons to utilise health care services. However, these beliefs predisposed older persons to utilise either orthodox or traditional medical services. In this regard, if an older person believed the aetiology of their illness was a pathological process, they sought orthodox services. However, if the older person believed the disease had a supernatural aetiology, they sought help from traditional practitioners in addition to the orthodox service they received.
Despite these variations, all participants believed it was necessary to maintain good health as they grew older, and they valued it as such:

"I think this sickness is coming along with my old age so I come to the hospital because they can be of help…… I don't really believe in those stories where people say there is a curse or something. It is a disease and I have to go to the place where they treat it." (Female, 60-69 years)

"Ah…………….I think it was because I offended some people in the past that is why I got that sugar disease [diabetes] so I visit the herbalist for assistance in addition to what the doctor will give me at the hospital" (Female, 50-59 years)

Past health care experiences

It was noted that past experiences with health care services served as a determinant that allowed older persons to use a service again. This was associated with how satisfied they were with previous utilisation. Older persons were able to compare the services of various health care facilities and, based on their past experiences, opted for a facility they felt more comfortable with:

"I have been at the hospital three years now and in the past, anytime I felt sick I came here so I feel more comfortable been there; I know most of the people there too so I get comfortable with them as well" (Male, 60-69 years)

"Even though I spend more time waiting there, the Government hospital at Konongo is worse and it is terrible there so I prefer traveling here to see a doctor". (Male, 70-79 years)

Enabling themes

The themes noted here are family influence and support, accessibility and health care costs.

Family influence and support

In the presence of an illness, influence and support from close family members served as enabling determinants. Family influence was evident as some adult children encouraged their parents to utilise available health services. The influence of adult children was also reflected in reminding their parents of medical appointments, and this allowed participants to keep track of when they had to be present at the hospital. Aside from adult children, spouses were noted to be influential in enabling older persons to utilise health care services. Support offered by adult children and spouses also comprised financial support, as that helped older persons pay for the costs of health care utilisation:

"It is my daughter who told me to go to the hospital because they have medications to help me" (Female, 70-79 years)

"Some time ago, I hardly visited the hospital until I started urinating too often. I did not want to come here because I will spend the entire day here until my daughter talked me into coming" (Male, 50-59 years)

"When my husband was alive, he occasionally came with me and paid the bills but now that he has died, I come alone and it is sometimes a problem for me especially when I do not have money" (Female, 70-79 years)

Accessibility

Aside from family support, easy accessibility to health care services was identified as a key issue and appeared to dominate a large part of the interviews. Accessibility represented how easily an older person could reach or come into contact with the health care facility. For some participants, the hospital or other health care facilities were within immediate reach, but other participants had to travel from other places to visit the hospital. Those who stayed close to the hospital most often utilised services as soon as they felt the need to and had the available resources.
However, for older persons who stayed far from the hospital, pharmacy shops were the first point of call when it became necessary to utilise health services. If no positive response was noted after utilising pharmacy services, the older person proceeded to the hospital on a later day, and this implied leaving the house early and having extra money for transportation. The choice of transportation to the health facility depended on how much money an older person had and how fast they hoped to arrive at their destination. Thus, financial constraints implied that the older person would not be able to visit the health facility. In other instances, participants did not turn up for medical reviews if they realised that they would arrive at the hospital late, which implied being seen by a health professional late. In these cases, participants remained home. Aside from staying at home with their symptoms, some participants utilised traditional medicine as it was more readily accessible than the orthodox services and they did not have to travel far. Thus, in this instance, herbal preparations served as a substitute and were utilised because they were within easy reach:

"I go to the pharmacy shop most often because it is close by and later come to the hospital if I still do not feel any better" (Male, 50-59 years)

"I leave the house very early in order to arrive here on time and see the doctor because if I come late, I am assured of getting back home late." (Female, 60-69 years)

"Sometimes I do not have money for transport so I stay back home till I am able to raise some cash otherwise I will not go to the hospital. Even if I have money and time is past 8am, I will not go to the hospital because if I do I will come home very late. So I manage and live with the pain till I am able to wake up early enough and come and queue here to be seen by the doctor" (female, 60-69 years)

"The herbalist is just at the community centre close to my house so I can just walk there and take some herbs, boil them and drink twice a week" (Female, 60-69 years)

Accessibility also represented how easily an older person came into contact with a health professional when they arrived at the health care facility to have their health care needs addressed. Participants had to go through various processes before being seen by a health care provider, and that required them to leave their homes early to queue at the hospital. Thus, coming to the hospital was noted to be characterised by long waiting times and cumbersome processes for older persons:

"I have to wake up early and come and drop my card at the entrance. If I come late, I will leave late so I come early so that at least I can still go and farm when I leave the hospital." (Female, 80 years)

"I have to wait long hours. After I have seen the doctor and I will be asked to take my blood to the lab for investigation. I will wait for the results and I can spend all morning. It is too long for me because I will have to join another queue to see the doctor with my results and another queue to pay and collect my medication." (Male, 70-79 years)

This problem was further compounded by inadequate sitting space, which implied that some clients had to stand as they waited for their turn, and they could not exit the OPD. Exiting the OPD meant that one could miss his/her turn of seeing the doctor, and as such participants would prefer standing to wait for their turn even when the OPD was full.
To this end, coming to the hospital was noted to be the least preferred option for participants as they were likely to spend an entire day in pursuit of receiving health care. A contributing factor to the long waiting hours at the hospital was identified to be the inadequate number of staff (notably doctors) at the OPD. The noted difficulties associated with accessing hospital services made participants patronise herbal medicines, as the herbal practitioners and their services were readily accessible in the community:

"In the hospital, if you don't have time then you do not have to even go. I get hungry but I cannot go and eat because the nurses might call me to see the doctor when I leave. If I miss that, it will be difficult getting me to see the doctor so I have to wait with the hunger. Sometime ago, I went there with cough complaints everyone looked at me when I cough so I thought the nurses would even allow me to see the doctor and leave for the house but it was some kind of first, come first served thing." (Female, 50-59 years)

"Most of the time, I have to wait for long hours to see the doctor. Sometimes we are informed there are only two doctors here and so we have to be patient to see them one after the other. After this queue, I will have to get medications at the pharmacy and that is also another long queue" (Female, 60-69 years)

"Things are faster with the herbalist at the community center. There is nothing like retrieving folders or going to lab. I talk with him and he gives me medication and I am off" (Female, 70-79 years)

Also, it was identified that older persons required information regarding their health status and the progress they had made in meeting treatment goals. However, in some cases participants could not obtain this information, and that made them uncertain about how they were progressing. This led some participants to utilise herbal preparations, as the herbal practitioners provided information about their conditions and offered encouragement to continue on the herbs even when no positive effects were noted by participants. This may reflect that the relationship between older persons and health care providers played a part in accessing health services:

"Sometimes I wish to spend more time talking with the doctor so that I can ask more questions but because other patients are waiting, I have to hurry and leave but I have questions bothering me." (Female, 60-69 years)

"The herbalist has time to talk with me and assure me I will get better so I will keep coming here" (Female, 50-59 years)

In addition to the above, it was noted that readily available medications or herbs further enabled older persons to utilise a particular health care service. Older persons who utilised hospital services noted that medications were not always available and that they had to visit several pharmacies. In some cases, the medications were simply unavailable; in other instances, a medication was available but expensive, and as such the older person could not afford it. However, older persons who patronised herbal preparations noted that herbs were readily available and easy to use as well:

"I can easily pluck the herbs in the forest for free" (Male, 60-69 years).

"Sometimes too, the medicines are not even available in the hospital and you have roam in the town searching for the drug. If I go to one or two places and I cannot get the drug, I just stop searching for the drug and stay on the herbs till it is time for the monthly review." (Female, 70-79 years).
Health care cost

The cost associated with health service utilisation was also identified as a factor that enabled older persons to utilise health services. Despite the existence of the National Health Insurance Scheme, older persons were required to make cash payments for services not covered by the scheme. Thus, utilising health care services at the hospital meant having enough money to pay for the services rendered. The cost associated with hospital service utilisation was described as expensive by participants. When money was not available, older persons went to the extent of borrowing money to cater for their health care needs. Despite the cost of services, participants expressed the need to be totally free of their ailments, but this appeared unachievable as the conditions were chronic in nature. To this end, participants noted that the health insurance scheme (NHIS) was not supportive, as it offered limited help with regard to covering health care costs:

"Coming to the hospital is expensive. When the NHIS came at first, I was told most of the things in the hospital were free but now I have to pay for almost everything. I don't know why it is like that but the president should do something about it. I am a farmer and if my products are not sold, it will be difficult to get money to come here" (Male, 50-59 years)

However, unavailability of money implied absence from the hospital for review. In some instances, participants utilised only part of the available services:

"hmmmmm……… I have said from the beginning that it is expensive coming here and when I do not have money, I do not even come at all. Even with the health insurance, the laboratory test I have to do cost me 16 Ghana cedis. As for the medication, the insurance covers only one and I have to buy the other ones. Sometimes I keep the prescriptions till the following month when I have money to purchase them. It is a problem for me (caps chin in the right palm)" (Female, 50-59 years)

"The NHIS used to be very helpful because it is covered almost everything in the country. Previously, I did not pay anything when I came to the hospital but now even the medications that I used to get for free with the NHIS, I have to pay for it. Going to the lab to check my sugar level too comes with costs now and that increases the amount of money I spend whenever I come here. Am sure you now understand why I said if I don't have money, I will not even try going there" (Male, 60-69 years)

At some points, older persons missed medical appointments because of unavailability of money, even when they were registered with the NHIS. However, the costs associated with traditional medicine services were noted to be lower than those of services offered at the hospital, and in some cases participants received free services from the traditional practitioners. This enabled older persons to utilise their services, especially when they had limited financial resources:

"I am supposed to go for check-up every month but I cannot keep to it because coming here is preparation so if I have money, I will come but sometimes the costs of the medications and services are so high that I cannot afford so I stay home" (Male, 50-59 years)

"I even like the herbs because it is not for sale; I only give a small token to the herbalist and then I have the herbs. If not for my daughter, I will see the herbalist every time instead of going to the hospital" (Female, 60-69 years).
Discussion

According to Andersen [15], individuals must first perceive illness or the likelihood of its occurrence for the use of health services to occur. In this study, the medical condition (symptoms and diagnosis) experienced by participants made them require health care. This finding corroborates those of Kohno et al. [11], who noted that the need determinants associated with Japanese retirees utilising health services in Malaysia included perceived health need by the older person, medical symptoms and self-rated general health status. In addition to the existence of symptoms, the current study noted that these symptoms interfered with the older persons' daily activities and as such they felt the need to return to their previous state of health: a finding that was not identified in previous studies and that could serve as an explanation of why medical symptoms serve as a need factor. Furthermore, Kim and Lee [21] and Exavery et al. [12] identified the existence of chronic illness as a need factor, as there was a need for continuing contact with health professionals (professional evaluation). Along similar lines, the current study noted that by seeking professional evaluation for their illnesses, participants were made to come for reviews every one, two or three months. Thus, chronic illness required ongoing health utilisation so as to monitor the progress of older persons and maintain good health.

Andersen et al. [13] have described predisposing determinants as those characteristics that make some persons utilise health services more than others. A theme captured in this study was perceived effects of ageing. Participants felt that as they grew older, they became ill more often, and that made them require utilisation of outpatient services. Though Kim and Lee [21] noted that increasing age was associated with lower outpatient service usage, the current study identified that increasing age predisposed an older person to utilise those services. This is because participants associated advancing age with physical symptoms, such as joint pains, that may require more outpatient services; this substantiates the findings of Jahangir et al. [14], who noted that increasing age was positively associated with increased use of preventive services available on an outpatient basis. This could mean that preventive services played a major role in the health of an older adult, and as such policy may need to consider increasing those services.

Furthermore, beliefs and past health care experiences were also identified to predispose older persons to utilise health services. An aspect of the findings substantiates those of Wong and Diaz [22] in that, if health conditions in the past predisposed the older person to utilise health care services more and they were subsequently satisfied, they were likely to use the services again. However, a unique sub-theme which emerged from the current study was that beliefs related to the aetiology of a particular disease further predisposed an older person to utilise either orthodox or traditional services. According to Andersen [15], health beliefs include a wide range of personal thinking and behaviours, such as attitudes, values and knowledge, that people develop throughout their lives pertaining to healthcare services. Andersen and Davidson [23] further argue that health beliefs influence the way people formulate ideas about their own need for healthcare services.
In relation to this, it may mean that participants thought that some conditions required a hospital visit, whilst for other conditions a hospital visit was considered inappropriate. This assertion may reflect their past encounters with the health system. If the health service was successful in achieving optimum outcomes, the older person might want to utilise it again. However, if this did not occur and the disease was attributed to supernatural causes, traditional medicines were utilised. This suggests that both orthodox and traditional services met specific needs of older persons at particular times. Thus, better collaboration between these modes of service delivery may be helpful, and this may require policy support.

Even though individuals may be predisposed to use health care services, some means need to be available to ensure their actual usage. These factors have been described as enabling determinants [13]. For this study, the enabling factors were identified as family influence and support, accessibility and health care costs. Health care costs were generally noted to be high by participants in this study, even though they had all registered with the NHIS. This is because the insurance scheme, which has been identified by Wong and Diaz [22] as an enabling factor, is unable to meet the entire cost of health service utilisation by older persons. This implies that a lack of adequate finances to supplement the NHIS may serve as a barrier to health care, and as such participants either utilised traditional medicines or delayed their attendance at the hospital. This confirms the findings of Kohno et al. [11], who noted that a lack of enabling resources may delay health care utilisation by older persons. Unique to the current study was that as participants delayed utilising health care, they used traditional medicines as a substitute. However, this study could not establish the outcomes associated with this utilisation.

In meeting health care costs not covered by the NHIS, it was identified from this study that family support played an essential role. This is because older persons with family support were able to make it to the hospital for reviews, whereas participants without support missed medical appointments and used herbal preparations more frequently. This finding supports an earlier one by Jahangir et al. [14], who identified the existence of family income as an enabling factor.

Aside from finance, accessibility was also noted as an enabling factor. Accessibility was noted at four levels in this study and appeared to dominate a large part of the interviews: accessibility to the health facility, the health professional, medications and information regarding one's health status. Though some participants lived close to the health facility, others had to travel to the facility, in which case their first point of utilisation was the pharmacy. On some occasions, transport fares presented a challenge to older persons, as they needed extra money to pay for the fares. Along similar lines, Exavery et al. [12] noted that the ability to be transported to the health facility played a role in enabling older persons to utilise health services. Though Parmar et al. [24] have noted that living far from the hospital affected the enrolment of older persons into the insurance scheme, it was identified that all participants in this study had registered with the NHIS.
This variation could be related to the fact that the exemption of older persons from paying premiums in Ghana has attracted greater enrolment. In terms of accessibility to the health professional, participants noted that the process was time consuming and cumbersome. These findings are in line with those of Rooy et al. [25], who reported that long waiting times affected older persons' utilisation of hospital services and resulted in them utilising herbal products, as these were readily available in the communities. Similar findings were also reported by Aboderin [26] and Goins et al. [27]. This may mean that older persons wanted to be attended to by health professionals as soon as they came to the hospital, but in most cases they did not experience that. Thus, there may be a need to improve the staff strength at the facility to enhance rapid delivery of services to patients. In addition, there may be a need to encourage health professionals (nurses and physicians) to consider specialising in geriatrics so as to offer specialist care to older persons who visit the facility.

In terms of medications, participants noted that they sometimes had to roam in search of them, as some types of medication were unavailable at the hospital. In some instances, the medications were available but expensive, in which case an older person either utilised traditional medicine or saved money to purchase the medications at a later date. Along similar lines, Etowa et al. [28] have asserted that older persons usually utilised self-medication and traditional preparations, as these were cheaper and more easily accessible. This might represent an avenue for policy consideration so as to foster a closer working relationship between orthodox and traditional services. Participants expressed the need to have information regarding how well they were progressing, but in most cases they could not access this information, and that made them uncertain. Participants who visited the traditional herbal practitioners noted that the practitioners were available to provide information, which was basically explanations of the symptoms an older person was experiencing. Along similar lines, Rooy et al. [25] noted that even though older persons appreciated modern health care services, they utilised herbal products due to the limited number of health care staff and an increasing patient-to-provider ratio at the hospital.

As this study was conducted in a district hospital, a similar study may be carried out in a teaching hospital to compare findings, as it is expected that the latter may have more specialised forms of services for all patients and, as such, variations in experiences may emerge. As participants noted utilisation costs to be high, future research might consider the actual costs associated with health service utilisation in comparison with those covered by the NHIS, so as to enable a more objective assessment of the insurance scheme and further recommendations. Findings from this study are, however, limited and unique to the setting in which it was undertaken. Also, the findings may be limited to older persons utilising discretionary health services (outpatient hospital services), as the study focused on them. Thus, further studies are warranted to explore the phenomenon among older persons using other forms of health services in the district.

Conclusion

Findings from this study have provided an understanding of older persons' experiences with outpatient hospital services.
Though older persons in the district may feel the need to utilise health services on an outpatient basis, the enabling factors (notably finance) appeared to be a driving force behind actual utilisation. Thus, more innovative health care financing strategies are needed to enhance the coverage of health services for older persons because, even though they may feel the need to utilise health services, financial constraints may delay the process. In addition, there may be a need to increase the staff strength so as to shorten the waiting time for older persons seeking health care, as well as to enhance easy access to health professionals. In this regard, nurses and doctors may need to be supported to undertake specialist education in geriatrics so as to plan and execute programmes tailored to meet the needs of older persons seeking health care.
Auditory perception in the aging brain: the role of inhibition and facilitation in early processing

Aging affects the interplay between peripheral and cortical auditory processing. Previous studies have demonstrated that older adults are less able to regulate afferent sensory information and are more sensitive to distracting information. Using auditory event-related potentials, we investigated the role of cortical inhibition on auditory and audiovisual processing in younger and older adults. Across pure-tone, auditory and audiovisual speech paradigms, older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2. Older adults were still able to use congruent visual articulatory information to aid auditory processing but appeared to require greater neural effort to resolve conflicts generated by incongruent visual information. In combination, the results provide support for the Inhibitory Deficit Hypothesis of aging. They extend previous findings into the audiovisual domain and highlight older adults' ability to benefit from congruent visual information during speech processing.

Introduction

Aging affects auditory perception in a diverse and multi-faceted manner. Presbycusis is a general term that refers to high-frequency age-related hearing loss and is present in approximately 50% of adults aged over 70 (Roth et al., 2011). It is typically characterized by a progressive loss of hearing that begins in the high-frequency ranges and subsequently advances into the middle and lower frequencies (Gates and Mills, 2005). Interestingly, auditory ability as measured via pure-tone hearing threshold levels (HTLs) does not straightforwardly correlate with functional performance by older adults on auditory tasks (Schneider and Pichora-Fuller, 2000; Tun et al., 2012). Instead, functional deficits in older adults, such as impaired frequency discrimination (Schneider and Pichora-Fuller, 2000), gap detection (Schneider et al., 1994, 1998), and greater sensitivity to noise during speech perception (Helfer and Freyman, 2008; Tun et al., 2012) are better predicted by measures of executive function (Akeroyd, 2008; Houtgast and Festen, 2008; Humes, 2005). That is, frontal cortical areas appear to compensate for reduced peripheral auditory and auditory cortex activity (Wong et al., 2009), although the frontal lobe itself shows the greatest age-related linear degeneration in the cortex (Raz and Rodrigue, 2006; Raz et al., 2005). The emerging picture is that auditory perception and cognition involve a complex interplay between peripheral and central systems, each undergoing age-related changes (see also Anderson et al., 2013; Bidelman et al., 2014).

The Inhibitory Deficit Hypothesis (IDH) of cognitive aging proposes that age-related deficits in performance across a wide range of perceptual, attentional, and cognitive tasks stem from an inability to inhibit the processing of irrelevant information (Hasher and Zacks, 1988). Three functions of inhibition have been distinguished: (1) controlling access of irrelevant information to the focus of attention and working memory, (2) deleting irrelevant information from attention and working memory, and (3) suppressing or restraining strong but inappropriate responses (Guerreiro et al., 2010; Hasher et al., 2007).
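As an illustration of how the component definitions discussed below translate into measurement, the following sketch extracts peak amplitudes and latencies from an averaged waveform within latency windows of the kind quoted in this paper. The waveform is synthetic and the windows are only rough conventions; this is not the analysis pipeline of the study itself.

```python
import numpy as np

# Typical latency windows (ms) and polarities based on those quoted in the text;
# the waveform below is synthetic and purely illustrative, not data from the study.
WINDOWS = {
    "P50": (40, 70, +1),
    "N1":  (90, 130, -1),
    "P2":  (150, 200, +1),
    "N2":  (200, 350, -1),
}

def component_peaks(erp_uv, times_ms):
    """Return {component: (peak latency in ms, peak amplitude in uV)}."""
    peaks = {}
    for name, (t0, t1, polarity) in WINDOWS.items():
        mask = (times_ms >= t0) & (times_ms <= t1)
        segment = erp_uv[mask]
        idx = np.argmax(polarity * segment)   # most positive or most negative point
        peaks[name] = (float(times_ms[mask][idx]), float(segment[idx]))
    return peaks

# Synthetic averaged ERP sampled at 1 kHz from 0 to 400 ms (Gaussian bumps only).
times = np.arange(0, 400, dtype=float)
erp = (2.0 * np.exp(-(times - 55) ** 2 / 80)
       - 4.0 * np.exp(-(times - 110) ** 2 / 200)
       + 3.0 * np.exp(-(times - 175) ** 2 / 300)
       - 2.0 * np.exp(-(times - 260) ** 2 / 800))
print(component_peaks(erp, times))
```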
The second and third aspects of inhibitory deficit have been well demonstrated in many studies in which older adults are able to facilitate and enhance the processing of relevant visual information, yet are unable to efficiently ignore irrelevant information (e.g., Gazzaley et al., 2005, 2008;Vallesi et al., 2009). In addition, these deficits are to some extent reversible with training that boosts frontal lobe activity (Anguera et al., 2013). Less well established, particularly in the auditory modality, are the effects of age on the first subcomponent of inhibition, that is, controlling access to the focus of attention. We used the P50, N1, P2, N2, mismatch negativity (MMN) and P3a event-related potentials (ERPs) to examine the role of inhibition in controlling access to the focus of attention in early perceptual processing in young and older adults. The earliest component, P50, typically peaks between 40 and 70 ms, is generated bilaterally in the primary auditory cortex (Liegeois-Chauvel et al., 1994;Reite et al., 1988;Weisser et al., 2001) and reflects the regulation of sensory information from the peripheral nervous system to the cortex. The N1 typically peaks between 90 and 130 ms, and like the P50, is generated bilaterally in the primary and association auditory cortex, with generators in the transverse temporal gyri (Picton et al., 1999;Scherg et al., 1989). The P2 peaks between 150 and 200 ms with generators in the auditory association cortex, is responsive to complex acoustic features, and is modulated by learning and expertise (Pantev et al., 1988;Shahin et al., 2005). The P50-N1-P2 complex reflects the information flow from primary auditory to association cortical processing, the transition from tonotopic auditory processing to more complex spectral processing, and a growing sensitivity to top-down regulation. The N2 family comprises the standard N2, the N2a, and the N2b. These subcomponents appear between 200 and 350 ms, and their topography, neural sources, and proposed function vary depending on subtype (Folstein and Van Petten, 2008). We focussed on the standard N2, typically observed in response to stimuli whose processing involves inhibition, for example, ignoring the standard stimulus in an oddball task (Bertoli and Probst, 2005). It has a fronto-central distribution with neural sources in the right orbito-frontal cortex and anterior cingulate (Falkenstein, 2006;Näätänen and Picton, 1986). Deficits in inhibitory processing due to age or pathology (e.g., depression, alcoholism) have been associated with a reduced or absent standard N2 (Bertoli and Probst, 2005;Kaiser et al., 2003;Pandey et al., 2012;Wascher et al., 2011). The MMN represents the detection of change in the sensory environment (Escera and Coral, 2007), and can initiate the orientation of attention to novel or unexpected stimuli (Näätänen and Michie, 1979;Näätänen et al., 1978). It is calculated by subtracting the response to the frequently presented standard stimulus from that of a rare deviant stimulus and is typically characterized by a negative deflection in this difference wave during the period of 100-250 ms, with neural sources located bilaterally in the superior temporal gyri and frontal lobes (for a review see Deouell, 2007). Finally, the P3a component provides the first index of selective attentional orientation and is proposed to represent the updating of working memory representations of incoming stimuli.
The P3a has a fronto-central topography and is elicited in response to task-irrelevant rare events (Katayama and Polich, 1998). Age affects these ERPs differentially. Early P50 and N1 amplitudes have been shown to increase and latencies decrease in older adults, an effect explained as a reduction in frontal regulation of afferent sensory input (see Friedman, 2008 for a comprehensive review). No demonstrable pattern emerges for P2, while there is limited but consistent evidence for a reduction in N2 amplitudes (Ceponiene et al., 2008). Oddball paradigms are valuable tools in assessing both distraction and inhibition in older adults. Measures of distraction, for example, the P3a or incorrect behavioral responses to task-irrelevant deviant stimuli, have been demonstrated to be slower in older adults (Andrés et al., 2006). Inhibition can be observed in oddball paradigms through responses to repeating, task-irrelevant standard stimuli. In aging, the P50 or N1 amplitudes to standard stimuli have been shown to be enhanced as a result of less efficient inhibition of irrelevant information (Friedman, 2008), whereas the N2 response to standard stimuli, which is considered to reflect a halt in the processing of an irrelevant stimulus (see Section 4), has been shown to be reduced or absent in older adults (Bertoli and Probst, 2005). There is mixed evidence for age-related changes to MMN, with considerable variation in experimental design, analysis techniques, and control for age-related hearing loss contributing to conflicting results (see Cheng et al., 2013 for a meta-analysis and review). The P3a response is typically reduced or delayed in healthy aging, suggesting a reduced attentional orientation response in older adults (Czigler et al., 2006;Fabiani and Friedman, 1995;see Friedman, 2008 for a review; Knight, 1987;Walhovd and Fjell, 2001). However, there is evidence that the P3a habituates in younger adults, but not in older adults, suggesting that attentional capture by rare or novel stimuli may be greater in aging (Alperin et al., 2014;Friedman et al., 1998). Speech processing provides a useful tool to examine whether the effects of age on perception and attention in auditory processing extend to audiovisual processing and whether older adults are still able to benefit from the presence of visual cues and "facilitate" the processing of relevant sensory information. As an experimental stimulus, speech is equally ecologically valid in both its audiovisual and auditory forms. Viewing a speaker while listening to continuous speech has been shown to be equivalent to a 15 dB increase in the auditory signal (Sumby and Pollack, 1954), whereas simply observing silent visual speech articulation activates the primary and association auditory cortices (Calvert et al., 1997). The boost given to auditory processing by visual information is, however, dependent on the congruency and predictive value of the visual articulation (van Wassenhove et al., 2007;Winneke and Phillips, 2011). It appears that much of the benefit derived from multisensory speech is maintained in aging. Older adults' behavioral performance in speech perception studies using multisensory and unisensory speech stimuli has been demonstrated to be comparable to that of younger adults (Sommers et al., 2005;Tye-Murray et al., 2010), and their sensitivity to the McGurk illusion to be equivalent to that of younger adults (Cienkowski and Carney, 2002;Huyse et al., 2014).
It has been proposed that audiovisual integration is exceptionally robust to, and may even be enhanced by, aging, and that enhancement may be a compensatory process for unisensory processing deficits (Diederich et al., 2008;Peiffer et al., 2007;Winneke and Phillips, 2011). What remains to be addressed is how multisensory processing performance in older adults contributes to more general theories of cognitive aging. Following the predictions of the IDH, congruent visual information should aid auditory processing and this benefit should be maintained in older adults, whereas incongruent visual information should serve as a greater distractor to older adults. We conducted 2 experiments to investigate the role of facilitation and inhibition in auditory processing in aging. Experiment 1 examined the effects of age on auditory processing of puretone and natural speech stimuli. We hypothesized that older adults experience deficits in the inhibition of auditory information that would manifest as increased early sensory responses (P50 and N1), as a consequence of reduced frontal lobe regulation of afferent sensory information. In addition, we hypothesized that older adults would show a reduced N2 to standard stimuli and reduced auditory mismatch negativity (aMMN) as a consequence of their inability to successfully ignore or inhibit the processing of repeating standard stimuli. Experiment 2 examined whether the patterns of age-related change observed in experiment 1 extended to audiovisual speech processing. In addition, by manipulating the congruency of the accompanying visual information, we were able to examine the influence of "relevant" versus "distracting" visual information. We hypothesized that older adults should still be able to facilitate relevant information, that is, congruent visual information, but would show greater distraction or interference from incongruent visual information. Experiment 1 The experiment consisted of 2 paradigms. First, puretones were presented in a passive listening paradigm providing measures of basic auditory processing (P50, N1, and P2) and inhibitory processing (N2). Second, natural speech syllables were presented in an oddball paradigm which, in addition to the measures provided by the passive listening paradigm (P50, N1, P2, and N2), also provided measures of change detection and attentional orientation (MMN and P3a, respectively, to task-irrelevant deviant stimuli). Participants Twenty younger adults (aged 18-23, mean age 19.5 [±1.5], 5 males) and 26 healthy older adults (aged 62-88, mean age 76.0 [±7.0], 14 males) gave consent to participate in the study. Younger adults were recruited from the University of Bristol student population and declared themselves to be in normal health. Older adults were recruited by the Avon and Wiltshire and South Gloucestershire Primary Care Trust memory service clinics at the Bristol Research into Alzheimer's and Care of the Elderly Centre, Frenchay Hospital, and the Research Institute for the Care of Elderly People, Royal United Hospital, Bath. They participated as part of a wider study into dementia as healthy controls. Each older adult was assessed by memory clinic staff and displayed normal cognitive function in relation to their age and educational attainment (mean mini-mental state examination score = 28.5/30 [±1.2]), and none met clinical criteria for dementia or any other neuropsychological disorder.
No older adults had history or signs of stroke or transient ischemic attack, significant head injury, depression, or other psychiatric disorder, or major neurological disease, and none were receiving medication (prescribed or non-prescribed) deemed likely to affect cognitive function. All had normal or corrected-to-normal vision and were right hand dominant. All appropriate approvals for our procedures were obtained from the National Research Ethics Service Committee South West-Bristol, Ref. 09/H0106/90. Participants provided written informed consent before participating and were free to withdraw at any time. Puretones Stimuli were 1000-Hz puretones presented binaurally through headphones at a fixed volume of approximately 60-dB sound pressure level (SPL). The duration of the tones was 200 ms with a mean interstimulus interval (ISI) of 560 ms, varying randomly between 480 and 640 ms. Auditory speech Stimuli were digitally recorded samples (audio sample rate: 44.1 kHz in 16 bits) of a female speaker pronouncing the syllables /ba/ (standard), /da/ (deviant), and /bi/ (target). The /ba/ syllable was the audio recording taken from the /ba/ video used in the audiovisual paradigm (see experiment 2), ensuring that the standard stimuli were acoustically identical in both experiments. Stimuli were presented binaurally through headphones at approximately 60-dB SPL above the participant's HTL. The duration of the stimuli was 325 ms with a mean ISI of 620 ms, varying randomly between 520 and 720 ms. Stimuli were matched for intensity using Praat software (Boersma and Weenink, 2009). Hearing threshold level assessment and adjustment Participants' HTLs were assessed using a Békésy threshold procedure (Haupt, 2003). The auditory standard /ba/ and deviant /da/ stimuli were used as stimuli in the HTL test rather than puretones to provide an ecologically appropriate measure of HTL. Puretone stimuli were presented at a fixed 60-dB SPL and not adjusted for individuals' HTL. Auditory speech stimuli were presented at approximately 60-dB SPL above the participant's individual HTL, which required an increase in the stimulus SPL of 8.98 (±2.82) dB for younger adults and 17.80 (±7.33) dB for older adults. This ensured that any age-related differences observed in responses to the auditory-only or audio-visual speech could be compared against a paradigm in which HTL had not been adjusted, to dissociate the effects of age from the effects of the physical intensity of the stimulus. In addition, correlational analyses between ERP amplitudes and HTL are presented in Supplementary Material. 2.3.1.1. Puretones. Participants were instructed to listen to the tones, to not respond in any way, and to maintain their gaze at a fixation point on the monitor. Two hundred tones were presented. 2.3.1.2. Auditory speech. Participants were instructed to maintain their gaze at a fixation cross in the centre of the screen while listening to a continuous stream of syllables, consisting of the frequent standard syllable /ba/ interspersed with an infrequent deviant /da/ and an infrequent target /bi/. They were asked to press a button in response to the target stimulus. They were instructed to ignore the standard and deviant stimuli. The target and deviant stimuli were presented in a pseudo-random sequence among the standards with at least 2 standards preceding each deviant. Eight hundred and ninety-six standards, 112 deviants (i.e., standard:deviant ratio = 8:1) and 8 targets were presented in 2 blocks lasting 8 minutes each.
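The sequence constraints just described (a fixed standard:deviant ratio, rare targets, at least two standards preceding every rare event, and a jittered ISI) can be made concrete with a short sketch. This is an illustrative reconstruction rather than the authors' presentation code; the counts and timing values are taken from the text, while the function names and random seed are arbitrary.

```python
import random

def make_oddball_sequence(n_standards=896, n_deviants=112, n_targets=8,
                          min_standards_before_rare=2, seed=1):
    """Build a pseudo-random oddball sequence in which every rare event
    (deviant or target) is preceded by at least `min_standards_before_rare`
    standards, as described for the auditory speech paradigm."""
    rng = random.Random(seed)
    rare = ['deviant'] * n_deviants + ['target'] * n_targets
    rng.shuffle(rare)
    # Each rare event "consumes" the minimum run of standards that must precede it.
    free_standards = n_standards - len(rare) * min_standards_before_rare
    if free_standards < 0:
        raise ValueError("not enough standards to satisfy the constraint")
    # Distribute the remaining standards randomly across the gaps before each
    # rare event and after the last one (len(rare) + 1 gaps in total).
    gaps = [0] * (len(rare) + 1)
    for _ in range(free_standards):
        gaps[rng.randrange(len(gaps))] += 1
    sequence = []
    for gap, event in zip(gaps, rare):
        sequence += ['standard'] * (min_standards_before_rare + gap) + [event]
    sequence += ['standard'] * gaps[-1]
    return sequence

def jittered_isis(n, mean_ms=620, half_range_ms=100, seed=1):
    """Uniformly jittered interstimulus intervals, e.g. 520-720 ms around 620 ms."""
    rng = random.Random(seed)
    return [rng.uniform(mean_ms - half_range_ms, mean_ms + half_range_ms)
            for _ in range(n)]

trials = make_oddball_sequence()
isis = jittered_isis(len(trials))
```

For the puretone paradigm the same jitter function would be called with a 560 ms mean and an 80 ms half-range; splitting the resulting list into the two recording blocks is left out of the sketch.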
(Initially, no target stimulus was included, to exactly match the audiovisual paradigm in experiment 2. However, pilot data revealed that the lack of task, combined with the lack of visual stimulation, led to participants becoming drowsy and to subsequent overwhelming alpha wave contamination of the evoked potentials. Therefore, a rare target stimulus was introduced to maintain the attentional and physiological arousal of the participant. The number of target stimuli was kept very low to maintain as much congruity with the audiovisual paradigm as was possible.) EEG recording Electroencephalographic (EEG) signals were sampled at 1000 Hz from 64 Ag/AgCl electrodes fitted on a standard electrode layout elasticized cap using a BrainAmp DC amplifier (Brain Products GmbH) with a common FCz reference and online low-pass filtered at 250 Hz. Impedances were below 5 kΩ. Recordings were analyzed offline using Brain Electrical Source Analysis software v5.3 (BESA GmbH). Artifacts including blinks and eye movements were corrected using BESA automatic artifact correction (Berg and Scherg, 1994), and any remaining epochs containing artifacts exceeding ±100 µV were rejected. The rejection rate never exceeded 10% of trials for each participant and stimulus. EEG analysis Data were re-referenced offline to a virtual linked mastoid reference, using BESA spherical spline interpolation (BESA GmbH). Epochs from −100 to 500 ms around stimulus onset were defined for the auditory and puretone data. Given the well-established scalp distribution of auditory ERPs (i.e., peak amplitude typically occurring at the vertex) and after confirmation via examination of the topography of each component in each group (see Fig. 1C), the values of 9 electrodes (FC1, FCz, FC2, C1, Cz, C2, CP1, CPz, and CP2) were averaged to form a vertex region of interest. Averaging across electrodes that show consistent and comparable activity has also been demonstrated to be more reliable than using single electrodes (Huffmeijer et al., 2014). Grand average waveforms were used to select peak latency measurement epochs; see Supplementary Material. P50 was defined as the first positive maximum value following stimulus onset; N1, P2, N2, and P3 peaks were defined as sequential polarity maxima. Peak magnitude was measured as the mean amplitude during epochs defined by 1 SD around the mean peak latency. To calculate the aMMN, the averaged response to the standard stimuli was subtracted from the deviant stimuli to create a difference waveform. Sequential 1-sample t-tests were then applied to the difference waveforms for each group using the method outlined by Guthrie and Buchwald (1991). The number of consecutive time points necessary to indicate an epoch of significant difference between the standard and deviant responses was obtained from a simulation using an autocorrelation estimated from the data. Intervals with values of p < 0.05 that lasted for the required duration (14 consecutive time points [i.e., 14 ms] for the healthy older adults, 7 for the younger adults) were accepted as significantly different epochs. An aMMN amplitude was then calculated as the mean amplitude of any significant negative deflection in the difference waveform (as identified by the sequential t-test procedure) following the N1 peak, and aMMN peak latency as the most negative deflection in the difference wave. Statistical analysis For the puretone paradigm, the amplitudes and latencies of the P50, N1, P2, and N2 were examined in a 1-way (age: young vs. old) analysis of variance (ANOVA).
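The aMMN measurement described above (difference wave, pointwise one-sample t-tests across participants, and acceptance of only sufficiently long runs of consecutive significant samples) can be sketched as follows. This is a minimal illustration, not the BESA pipeline used in the study; the array shapes, function names, and the externally supplied run-length threshold are assumptions.

```python
import numpy as np
from scipy import stats

def ammn_epochs(standard, deviant, min_run_ms, alpha=0.05):
    """standard, deviant: arrays of shape (n_subjects, n_samples) sampled at
    1 kHz (1 sample per ms). Returns the grand-average difference wave
    (deviant minus standard) and a boolean mask marking samples that belong
    to runs of consecutive significant time points of at least `min_run_ms`."""
    diff = deviant - standard                            # per-subject difference waves
    t, p = stats.ttest_1samp(diff, popmean=0.0, axis=0)  # pointwise one-sample t-tests
    sig = p < alpha
    mask = np.zeros_like(sig)
    run_start = None
    for i, s in enumerate(np.append(sig, False)):        # sentinel closes a trailing run
        if s and run_start is None:
            run_start = i
        elif not s and run_start is not None:
            if i - run_start >= min_run_ms:              # keep only sufficiently long runs
                mask[run_start:i] = True
            run_start = None
    return diff.mean(axis=0), mask

def ammn_amplitude(mean_diff, mask, times_ms, after_ms):
    """Mean amplitude of significant negative samples occurring after `after_ms`
    (e.g. the N1 peak latency), i.e. the bespoke-epoch aMMN measurement."""
    neg = mask & (mean_diff < 0) & (times_ms > after_ms)
    return mean_diff[neg].mean() if neg.any() else np.nan
```

With 1000 Hz data the run-length threshold in milliseconds corresponds directly to the 14- and 7-sample criteria quoted above; deriving that threshold from an autocorrelation-based simulation, as in Guthrie and Buchwald (1991), is not shown here.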
For the auditory speech paradigm, the amplitudes and latencies of the 4 major auditory ERPs (P50, N1, P2, and N2) were examined individually in a 2 (age: young vs. old) × 2 (condition: standard vs. deviant) ANOVA. An aMMN and P3a to deviants were examined separately in a 1-way (age: young vs. old) ANOVA. Puretone ERPs Averaged ERPs to puretones for younger and older adults are shown in Fig. 1. Auditory speech ERPs Averaged ERPs to auditory-only speech for younger and older adults are shown in Fig. 2. There was weak evidence for an interaction between age and condition (F [1,44] = 3.74, p = 0.060). There was no significant effect of age, but there was an interaction between age and condition (F [1,44] = 5.58, p = 0.023) due to a more pronounced effect of condition on N1 latency in younger adults. Discussion Experiment 1 compared ERPs from younger and older adults to puretones, and to simple speech stimuli (syllables /ba/ and /da/) presented in an oddball paradigm. Older adults showed earlier P50 latencies followed by increased N1 amplitudes compared with younger participants for puretones. Similarly, older adults showed increased P50 and N1 amplitudes for auditory speech. Increased amplitudes of early sensory components in older participants under conditions in which the HTL was adjusted for (auditory speech) or not (puretones) demonstrate that the effect was not a simple consequence of physically more intense/louder stimuli (see Supplementary Material for correlational analyses between ERP amplitudes and HTL). Critically, older adults' N2 response to regular repeating stimuli (i.e., the puretone stimulus and the standard in the auditory speech paradigm) was absent or strongly reduced compared with younger adults. As expected, no N2 peak was found in response to the deviant stimulus in either group (Bertoli and Probst, 2005). Older adults' P3a responses to deviant stimuli were increased and delayed. The combination of increased early sensory responses P50 and N1, absent or strongly reduced N2 to standard stimuli, and an increased P3a to deviant stimuli in older participants points to a decreased ability to inhibit responses to regular repeating information, and greater attentional capture from rare task-irrelevant information. There was no effect of age on P2 amplitudes or latencies in either paradigm, and aMMN in older adults was equivalent in amplitude and peak latency to that in younger adults. The implications of these findings for cognitive theories of aging are discussed in full in Section 4. Experiment 2 Experiment 2 extended experiment 1 into the audiovisual domain, using the same participants as in experiment 1. In experiment 2, we manipulated the congruency of the visual information accompanying the auditory speech stimulus to examine the influence of "relevant" versus "distracting" visual information. We expected that the facilitatory effects of congruent visual information would be preserved in older adults, whereas interference from incongruent visual information would increase with age. Audiovisual speech Stimuli were digitally recorded videos (frame rate: 25 images/s; audio sample rate: 44.1 kHz in 16 bits) of a female speaker pronouncing the syllables /ba/ and /ga/. Videos were digitally edited using Pinnacle software v.15 (Corel Inc) to ensure that the onset of syllabic articulatory movements, auditory onset, and auditory duration in both videos were identical. The videos were 1280 ms long with articulatory onset at 240 ms and auditory onset at 560 ms, see Fig. 3.
The duration of the auditory stimuli was 325 ms. The standard stimulus was the video of the speaker pronouncing /ba/. The deviant stimulus was created by overdubbing the audio track from the /ba/ video onto the silent video of the speaker pronouncing /ga/. The combination of auditory /ba/ and visual /ga/ typically elicits the McGurk illusion (McGurk and MacDonald, 1976), a fused percept of /da/. A summary of the syllables and percepts in both auditory and audiovisual paradigms is presented in Table 1. The mean ISI between videos of 620 ms varied randomly between 520 and 720 ms. During the ISI, a still frame of the speaker's face was presented on screen. This image was matched to the first and last frame of the videos, creating the impression of continuous natural speech, that is, no visual onset or offset. Stimuli were matched for auditory intensity using Praat software (Boersma and Weenink, 2009). Behavioral discrimination task Participants completed a discrimination task at the end of the EEG recording session. They were presented 25 congruent /ba/ and 50 incongruent McGurk /da/ videos, identical to those used in the audiovisual paradigm. In addition, 25 congruent /ga/ videos were presented (i.e., /ga/ video with congruent /ga/ auditory stimulus). Participants were instructed to watch the speaker's face at all times and to report the syllable they heard using a handheld response button box. The videos were presented in a fully randomized sequence lasting approximately 3 minutes. Audiovisual speech The videos were presented on a computer monitor 0.5 m directly in front of the participant. The auditory stimuli were presented binaurally through headphones at approximately 60-dB SPL above the participant's HTL following the same adjustment procedure as in experiment 1. Participants were instructed to attend to the speaker, listen to what was said and watch the speaker's face at all times. The standard /ba/ and deviant /da/ (i.e., McGurk) stimuli were presented in a pseudo-random sequence with at least 2 standards preceding each deviant. The ratio of standards:deviants was 8:1. Eight hundred and ninety six standards and 112 deviants were presented in 2 blocks lasting 12 minutes each. EEG recording techniques and analyses were identical to experiment 1. Epochs from −100 ms to 1500 ms were used for the audiovisual data in experiment 2. The influence of visual information on speech processing in aging To examine the role of visual information on speech processing in aging, we compared responses to the standard stimuli across the auditory and audiovisual paradigms as they were perceptually and acoustically identical in both paradigms, that is, the participant heard and perceived a /ba/ syllable. Deviant stimuli were not compared as although they were perceptually the same (i.e., auditory = "spoken" /da/, audiovisual = "illusory" /da/), they were acoustically different (i.e., auditory deviant = "spoken" /da/, audiovisual deviant = "spoken" /ba/). Statistical analysis The amplitudes and latencies of the 4 major auditory ERPs (P50, N1, P2, and N2) were examined individually in a 2 (age: young vs. old) × 2 (condition: standard vs. deviant) ANOVA. The influence of visual information on auditory processing and its interaction with age was examined using a mixed design ANOVA. A 2 × 2 ANOVA with factors group (young/old) and visual information (absent [= auditory]/present [= audiovisual]) was performed for the P50, N1, P2, and N2 responses to standard stimuli.
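For the mixed design just described (between-subject factor group, within-subject factor presence of visual information), a long-format table and an off-the-shelf routine are sufficient. The sketch below uses the pandas and pingouin packages; the data frame, column names, and amplitude values are placeholders for illustration and are not the authors' pipeline or data.

```python
import pandas as pd
import pingouin as pg

# Long-format table: one row per participant per paradigm, with the measured
# mean amplitude (e.g. of the N2 to standard stimuli) as the dependent variable.
df = pd.DataFrame({
    "subject":   ["y01", "y01", "y02", "y02", "y03", "y03",
                  "o01", "o01", "o02", "o02", "o03", "o03"],
    "age_group": ["young"] * 6 + ["old"] * 6,
    "visual":    ["absent", "present"] * 6,   # auditory-only vs. audiovisual paradigm
    "amplitude": [-1.8, -2.5, -2.1, -2.9, -1.6, -2.2,
                  -0.4, -1.1, -0.6, -1.3, -0.5, -1.0],  # illustrative values only
})

# 2 (group) x 2 (visual information) mixed ANOVA on standard-stimulus amplitudes.
aov = pg.mixed_anova(data=df, dv="amplitude", within="visual",
                     subject="subject", between="age_group")
print(aov[["Source", "F", "p-unc"]])
```

The 2 × 2 repeated-measures ANOVAs within each paradigm (age × condition) follow the same pattern with "condition" as the within-subject factor.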
No MMN or P3a response was observed; however, an extended period of positivity following the P2 peak was observed in the older adults' responses. To quantify group differences in this response, the mean amplitude of the difference wave (deviant minus standard) between 800-1500 ms was examined in a 1-way (age: young vs. old) ANOVA. Behavioral discrimination task One older adult did not complete the task due to tiredness. There was no significant difference between the groups in the number of McGurk /da/ illusions perceived (Younger mean (M) = 72%, SE = 8.78; Older M = 75%, SE = 6.12; t [1,43] = −0.33, p = 0.740), or the number of congruent /ba/ or /ga/ syllables correctly identified. The influence of visual information on speech processing in aging: comparison of auditory versus audiovisual ERPs To examine the role of visual information on speech processing in aging, we compared responses to the standard stimuli across the auditory (experiment 1) and audiovisual (experiment 2) paradigms. Recall that standard stimuli in the auditory and audiovisual paradigms were perceptually and acoustically identical, that is, the participant heard and perceived a /ba/ syllable. Deviant stimuli were not compared across the auditory (experiment 1) and audiovisual (experiment 2) paradigms, as the stimuli were acoustically different, that is /da/ in experiment 1 and /ba/ in experiment 2. The presence of visual information significantly delayed N1 latencies (F [1,44] = 4.85, p = 0.033). There was no significant effect of age (F [1,44] = 0.49, p = 0.826) or interaction between age and visual information (F [1,44] = 1.85, p = 0.181). N2. There was some evidence that the presence of visual information increased N2 amplitude, although the effect was only marginally significant (F [1,44] = 3.79, p = 0.058). Older adults showed significantly reduced N2 amplitudes across paradigms (F [1,44] = 16.26, p < 0.001). There was a significant interaction between age and the presence of visual information (F [1,44] = 5.45, p = 0.024) as visual information increased N2 amplitude in older but not younger adults. Discussion In experiment 2, older adults showed behavioral sensitivity to the McGurk illusion equivalent to that of younger adults. In terms of ERPs, experiment 2 replicated the pattern of inhibitory deficit in older adults observed in experiment 1. Older adults showed significantly increased P50 and N1 amplitudes compared with younger adults and no observable N2 peak to standard stimuli. The effect of congruency of visual information was also considerable, with congruent visual information increasing N1 amplitudes in both younger and older adults compared with incongruent visual information. Examination of the effects of the presence of visual information (via comparison of ERPs in response to standard stimuli in the audiovisual paradigm in experiment 2 vs. auditory paradigm in experiment 1) demonstrates similarly enhanced N1 amplitudes when (congruent) visual information is added to accompany an auditory stimulus in younger and older adults. This suggests that the facilitatory effect of visual information is maintained in aging. General discussion Across puretone, auditory and audiovisual speech paradigms older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2.
Experiment 1 demonstrated that this pattern was present in auditory processing regardless of adjustment for HTL or the acoustic complexity of the stimuli, whereas experiment 2 demonstrated that this pattern extended to audiovisual processing. We propose that these findings provide evidence for the IDH, in particular for the claim that older adults are less able to regulate the access of afferent sensory information to the focus of attention. Experiment 2 also provided further insights into the role of visual information in auditory processing in aging. Congruent articulatory visual information enhanced N1 amplitudes for audiovisual compared with auditory speech in both younger and older adults. When the effects of congruent and incongruent visual information were compared in audiovisual processing, incongruent information resulted in a prolonged, late positivity in older adults. An increase in early auditory ERP amplitudes in healthy older adults as compared with younger adults has been previously demonstrated (see Ceponiene et al., 2008;Friedman, 2008) and proposed to reflect a lack of inhibitory regulation of afferent sensory information by the prefrontal cortex. The absence of the standard N2 in older adults is less well documented and often overlooked in analyses in favor of examining responses to deviants or targets (e.g., Daffner et al., 2015). When addressed directly, the absence or reduction of the N2 has been associated with poorer gap detection and processing speed (Harris et al., 2012) and proposed to reflect a lack of frontal inhibition among older adults (Bertoli and Probst, 2005;Ceponiene et al., 2008;Getzmann et al., 2015;Zendel and Alain, 2014). A possible equivalent is the N350 observed in sleep EEG: it appears during sleep and sleepiness and reflects a mechanism contrary to attention, preventing conscious processing of stimuli and facilitating falling asleep (Kállai et al., 2003). It is speculative and beyond the remit of the present study to link the inhibitory negative components observed in sleep with the N2 observed in younger adults, but it is a worthwhile avenue for future research. We propose that the standard N2 in our present study represents a neural "stop-signal" that serves to prevent further unnecessary processing of a repetitive stimulus and that it is consistently and markedly absent in older adults across a variety of auditory processing tasks. This is a critical element of the first aspect of inhibition, that is, controlling access of irrelevant information to the focus of attention and working memory (Guerreiro et al., 2010;Hasher et al., 2007) and is an automatic and integral part of sensory processing. Note that in experiment 2, younger adults showed a clear N2 to the deviant stimulus, a finding that we did not predict. The incongruity of the audio-visual deviant affected even the earliest ERPs, thus making effects in later windows such as the N2 window difficult to interpret (see discussion of the absent audiovisual mismatch negativity below). One possibility suggested by the presence of an N2 to both standards and deviants is that the N2 is generated on the basis of the auditory stimulus alone (/ba/, i.e., identical for the standard and deviant), that is, it remains relatively unaffected by audiovisual binding compared with the earlier (P50 and N1) components. Another possibility is that the N2 to deviants is an N2b, reflecting direct attention to stimuli (Patel and Azzam, 2005).
This is notably different from the auditory speech paradigm, in which no N2 was observed to deviant stimuli, raising the possibility that attentional focus was greater to the audiovisual speech paradigm. Experiment 2 provides support for previous assertions that older adults maintain the ability to process relevant information yet are more susceptible to the distracting and interfering effects of irrelevant information (e.g., Cashdollar et al., 2013;Gazzaley et al., 2005Gazzaley et al., , 2008. Congruent articulatory visual information significantly increased N1 amplitude in both younger and older adults compared with auditory processing alone, demonstrating that older adults are still able to facilitate and enhance the processing of relevant (visual) information. This adds to the behavioral findings of Sommers et al. (2005) in which older adults received an equivalent visual "enhancement" of auditory processing in noise to younger adults. It should be noted that the interpretation of this increased N1 as a facilitatory effect, rather than as a marker of inhibitory deficit, is due to the underlying assumption that the younger adults' ERPs are the default "healthy" response, that is, because younger adults show an increase in N1 amplitude in the audiovisual paradigm, this is the baseline against which to compare. Older adults' P50 and N1 latencies were delayed, and younger adults showed earlier P50 latencies, in the presence of visual information, suggesting there may be a temporal cost to maintaining the benefit of congruent visual information with age. This adds to the findings of Diederich et al. (2008) who demonstrated that compared with younger adults, older adults showed slower overall multisensory integration during a saccadic reaction time task yet showed a greater neural benefit from congruent compared with incongruent multisensory stimuli. Older adults also perceived the McGurk illusion with comparable frequency to younger adults, replicating previous behavioral findings demonstrating maintained audio-visual integration in healthy aging (Cienkowski and Carney, 2002;Huyse et al., 2014). The increase in N1 amplitude as a consequence of the presence of congruent visual articulatory information is contrary to some previous ERP studies of audio-visual processing (e.g., van Wassenhove et al., 2007;Winneke and Phillips, 2011) in which audio-visual N1 amplitudes were reduced and latencies shortened compared with those in the auditory-alone condition, a pattern interpreted as reflecting increased neural efficiency. However, a critical difference between the present study and previous studies is that participants were not asked to respond to stimuli, and there was no distinct visual onset, that is, the speaker's face was onscreen at all times. Therefore, the interaction of attention, task demands, and the alerting effect of a distinct visual onset may affect the influence of predictive visual information on the timing and magnitude of auditory ERPs (Kok et al., 2012). Among older adults only, incongruent visual information resulted in a prolonged late positive deflection following the P2 that lasted the duration of the epoch. We suggest that the late positive deflection observed in the present study may reflect the increased processing effort required by older adults to reanalyse/revise mismatching visual articulatory and auditory information. 
Such increased processing has been previously demonstrated in psychophysical tasks in which older adults show a larger impact of distracting information on perceptual abilities as a result of prolonged processing of distractors (Cashdollar et al., 2013). Most relevantly, a similar late positivity has been demonstrated by Liu et al. (2011) in response to incongruent audio-visual scenarios in which the action in the video (e.g., fireworks exploding) mismatched the preceding audio (e.g., shattering glass). The authors related their finding to the linguistic "P600" effect, which is known to reflect a reanalysis or revision of incongruent syntactic information into a plausible or meaningful arrangement (Osterhout and Holcomb, 1992). Further research is needed to elucidate whether the late positivity observed in our study belongs to the same family of effects. Older adults showed an equivalent aMMN to younger adults, contrary to many previous studies of aMMN (e.g., Kiang et al., 2009), and contrary to our predictions. Interestingly, the MMN response appears to be robust to the preceding impact of inhibitory deficit on N1 amplitudes. The adjustment of HTL may explain the discrepancy with previous findings. It is well known that aMMN amplitudes increase as the standard and deviant become more discriminable (Schröger et al., 1992). Therefore, it is possible that previous studies that did not adjust for individual HTLs and simply ensured that all participants had HTLs below a common threshold (e.g., Alain et al., 2004;Cooper et al., 2006;Kiang et al., 2009) were presenting less discriminable stimuli for the older adults, resulting in a reduced aMMN. The use of natural speech stimuli, rather than puretones, may have also contributed to the maintenance of aMMN in older adults in the present study. The few electrophysiological studies on speech MMN in older adults show conflicting results: Bellis et al. (2000) found no effect of age on the MMN to syllables, whereas Cheng et al. (2015) showed a reduction in the magnetic MMN to speech syllables amongst older adults. In addition, the measurement of the aMMN differed from that in previous studies: we used sequential t-tests to identify the duration of the aMMN response and then measured the mean amplitude during this bespoke epoch, as opposed to taking mean amplitudes during arbitrarily defined epochs, e.g., from 100-200 ms or for 100 ms following the N1 peak. In fact, had such arbitrarily defined fixed epochs been used to measure the aMMN in the present study, the older adults would have shown a lower mean aMMN amplitude, because their aMMN response was 26 ms shorter than that of the younger adults. By identifying the duration of the aMMN response we are better able to assess its magnitude in each group. A possible interpretation of the current data is that aMMN is impacted by healthy aging, but it is the duration rather than the amplitude of the response that is reduced. For this hypothesis to be tested further, aMMN paradigms should be optimized to enable calculation of the duration of individuals' aMMN responses as well as group responses. Interestingly, despite the lack of large differences in the MMN, older adults also showed increased and delayed P3a responses to deviant stimuli, suggesting greater neural resources devoted to the attentional orientation toward deviant stimuli.
These findings complement previous studies showing increases in the P3a in older adults (e.g., Alperin et al., 2014;Daffner et al., 2015) and provide further support for the idea that older adults find it harder to ignore task-irrelevant information, a fundamental element of the IDH. No MMN response was observed in the audiovisual paradigm. In the present study, congruency of visual information significantly affected both the P50 and N1 peaks; consequently, the pre-MMN epoch (i.e., from stimulus onset to the N1 peak) was not equivalent for standards and deviants. If deviant stimuli did elicit an MMN response, it may have been masked by the preceding peak amplitude differences. Previous studies that have demonstrated a McGurk illusion MMN (Colin et al., 2002;Kislyuk et al., 2008;Saint-Amour et al., 2007) had 2 important methodological differences. First, there was a distinct visual onset and offset of the visual information, that is, the speaker's image appeared at the start and disappeared at the end of each trial. Second, visual-only ERPs were subtracted from the audio-visual ERPs to calculate the auditory ERPs. In the present study, the speaker's face remained onscreen at all times, so there was no distinct visual onset and offset. This provided more ecologically valid speech stimuli and avoided the confounding effect of visual-onset ERPs, but may have compromised the measurement of the audiovisual MMN. There are some limitations to the present study. First, attention was not directly controlled for. This raises the possibility that increased early sensory P50 and N1 amplitudes may have been a consequence of additional attentional effort among the older adult group, for example, as an attempt to compensate for any deterioration in hearing ability. Note that the absence/reduction of the standard N2 in the older participants cannot be solely an outcome of extra attention because Bertoli and Probst (2005) previously demonstrated such an age-related N2 reduction in both attended and unattended auditory oddball paradigms. Hence, the age-related reduction in the standard N2 cannot be explained away by extra attention from the older participants but rather reflects a genuine difference in the "stop-signal" process that the standard N2 indexes. Second, there were no behavioral measures of performance to assess the consequences of any inhibitory deficit in early sensory processing. Third, the ISIs were not equal in the auditory speech and audiovisual speech paradigms, possibly introducing a confound in the comparison of ERP amplitudes across paradigms. Future studies should investigate the role of attention and ISIs in inhibition and facilitation in older adults and explicitly examine the link between neural and behavioral responses. Finally, to further characterize oddball responses in both audio-visual and auditory-only paradigms, an additional audiovisual condition with a congruent deviant stimulus, e.g., visual /da/ + auditory /da/, would allow for the comparison of both standard and deviant stimuli responses across paradigms. In summary, we have demonstrated a pattern of age-related auditory processing that is consistent with the IDH. Older adults consistently show increased early sensory ERPs and an absence of the standard N2, which in combination reflect a deficit in the frontal regulation of sensory processing.
Older adults are still able to use congruent visual articulatory information to aid auditory processing, but at a temporal cost, and appear to require greater neural effort to resolve conflicts generated by incongruent visual information. Future work should focus on establishing the neural mechanisms of frontal regulation of sensory processing, and how these mechanisms change with age. Disclosure statement The authors have no conflicts of interest to disclose.
Character varieties for real forms Let Γ be a finitely generated group and G a real form of SL_n(C). We propose a definition for the G-character variety of Γ as a subset of the SL_n(C)-character variety of Γ. We consider two anti-holomorphic involutions of the SL_n(C) character variety and show that an irreducible representation with character fixed by one of them is conjugate to a representation taking values in a real form of SL_n(C). We study in detail an example: the SL_n(C), SU(2,1) and SU(3) character varieties of the free product Z/3Z * Z/3Z.
Introduction Character varieties of finitely generated groups have been widely studied and used, whether from the point of view of algebraic geometry or that of geometric structures and topology. Given a finitely generated group Γ , and a complex algebraic reductive group G, the G-character variety of Γ is defined as the GIT quotient X G (Γ ) = Hom(Γ , G)//G. It is an algebraic set that takes account of representations of Γ with values in G up to conjugacy by an element of G. See the articles of Sikora [30] and Heusener [19] for a detailed exposition of the construction. Whenever Γ has a geometric meaning, for example when it is the fundamental group of a manifold, the character variety reflects its geometric properties. For SL 2 (C)-character varieties, we can cite for example the construction of the A-polynomial for knot complements, as detailed in the articles of Cooper, Culler, Gillet, Long and Shalen [3], and Cooper and Long [4], or the considerations related to volume and the number of cusps of a hyperbolic manifold, as well as ideal points of character varieties treated by Morgan and Shalen in [23], Culler and Shalen in [5] and the book of Shalen [29]. On the other hand, SL 2 (C)-character varieties of compact surface groups are endowed with the Atiyah-Bott-Goldman symplectic structure (see for example [15]). In the construction of character varieties, we consider an algebraic quotient, namely Hom(Γ , G)//G where G acts by conjugation. The existence of this quotient as an algebraic set is ensured by Geometric Invariant Theory (as detailed for example in the article of Sikora [30]), and it is not well defined for a general algebraic group, nor when considering a non-algebraically closed field. Besides that, for the compact form SU(n), the classical quotient Hom(Γ , SU(n))/SU(n), taken in the sense of topological spaces, is well defined and Hausdorff. See the article [10] of Florentino and Lawton for a detailed exposition. Furthermore, if G is a complex reductive group, the G-character variety is identified with the set of closed orbits of Hom(Γ , G)/G. If K is a maximal compact subgroup of G, some recent results prove there is a strong deformation retraction from the set of closed orbits of Hom(Γ , G)/G to Hom(Γ , K )/K . When G is a complex or real algebraic reductive group, this fact is proven for Abelian groups by Florentino and Lawton in [12], for free groups by Casimiro, Florentino, Lawton and Oliveira in [2], and for nilpotent groups by Bergeron in [1]. The quotient Hom(Γ , SU(n))/SU(n), which Procesi and Schwartz show in their article [28] to be a semi-algebraic set, can be embedded in the SL n (C)-character variety; we give a proof of this last fact in Sect. 3.2. Similar quotients for other groups have been studied by Parreau in [25], in which she studies completely reducible representations, and in [26], where she compactifies the space of conjugacy classes of semi-simple representations taking values in noncompact semisimple connected real Lie groups with finite center. It is then natural to try to construct an object similar to a character variety for groups G which are not in the cases stated above, for example real forms of SL n (C).
For the real forms of SL 2 (C), Goldman studies, in his article [13], the real points of the character variety of the rank two free group F 2 and shows that they correspond to representations taking values either in SU(2) or SL 2 (R), which are the real forms of SL 2 (C). Inspired by this last approach, we will consider SL n (C)-character varieties and will try to identify the points coming from a representation taking values in a real form of SL n (C). For a finitely generated group Γ , we introduce two involutions Φ 1 and Φ 2 of the SL n (C)-character variety of Γ induced respectively by the involutions A → Ā and A → t Ā −1 of SL n (C). We show the following theorem (Theorem 1): an irreducible representation whose character is fixed by Φ 1 or Φ 2 is conjugate to a representation taking values in a real form of SL n (C). In the second section of this article, we recall the definition of SL n (C)-character varieties with some generalities and examples that will be studied further. In the third section, we recall some generalities on real forms of SL n (C), we propose a definition for "character varieties for a real form" as a subset of the SL n (C)-character variety, and we show Theorem 1 by combining Propositions 4 and 5 in order to identify those character varieties inside the fixed-point sets of the involutions Φ 1 and Φ 2 . At last, in Sect. 4, we study in detail the SU(3) and SU(2, 1)-character varieties of the free product Z/3Z * Z/3Z. This particular character variety has an interesting geometric meaning since it contains the holonomy representations of two spherical CR uniformizations: the one for the Figure Eight knot complement given by Deraux and Falbel in [7] and the one for the Whitehead link complement given by Parker and Will in [24]. Remark 3 We have a projection map Hom(Γ , SL n (C)) → X SL n (C) (Γ ). Two representations ρ, ρ′ ∈ Hom(Γ , SL n (C)) have the same image if and only if χ ρ = χ ρ′ . This explains the name "character variety" for X SL n (C) (Γ ). We will sometimes abusively identify the image of a representation ρ in the character variety with its character χ ρ . Semi-simple representations are representations constructed as direct sums of irreducible representations. We will use the following statement when dealing with irreducible representations. Some SL 2 (C) and SL 3 (C)-character varieties We consider here two SL 3 (C)-character varieties that we will study further: the character variety of the free group of rank two F 2 and the one of the fundamental group of the Figure Eight knot complement. We will also recall a classic result describing the SL 2 (C)-character variety of F 2 . The free group of rank 2 We denote here by s and t two generators of the free group of rank two F 2 , so F 2 = ⟨s, t⟩. We will use the character varieties X SL 2 (C) (F 2 ) and X SL 3 (C) (F 2 ). Consider first the following theorem, which describes the SL 2 (C)-character variety of F 2 . A detailed proof can be found in the article of Goldman [16]. Theorem 4 (Fricke-Klein-Vogt) The character variety X SL 2 (C) (F 2 ) is isomorphic to C 3 , which is the image of Hom(F 2 , SL 2 (C)) by the trace functions of the elements s, t and st. Remark 4 Thanks to the theorem above, we know that it is possible to write the trace of the image of st −1 in terms of the traces of the images of s, t and st for any representation ρ : F 2 → SL 2 (C). By denoting S and T the respective images of s and t, the traces of the four elements are related by the trace equation tr(ST) + tr(ST −1 ) = tr(S) tr(T).
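For completeness, the trace relation just quoted follows from the Cayley-Hamilton theorem applied to any $T \in \mathrm{SL}_2(\mathbb{C})$; this is a standard one-line computation, recorded here as a convenience rather than taken from any of the cited references.
\[
T^2 - \operatorname{tr}(T)\,T + \mathrm{Id} = 0
\;\Longrightarrow\;
T + T^{-1} = \operatorname{tr}(T)\,\mathrm{Id}
\;\Longrightarrow\;
\operatorname{tr}(ST) + \operatorname{tr}(ST^{-1})
 = \operatorname{tr}\!\big(S\,(T + T^{-1})\big)
 = \operatorname{tr}(S)\operatorname{tr}(T).
\]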
On the other hand, in his article [21], Lawton describes the SL 3 (C)-character variety of F 2 . He obtains the following result: the character variety X SL 3 (C) (F 2 ) is isomorphic to an algebraic set V ⊂ C 9 , which is the image of Hom(F 2 , SL 3 (C)) by a collection of trace functions. Remark 5 The polynomials P and Q are explicit: we can find them in the article of Lawton [21] or in the survey of Will [31]. By denoting Δ = Q² − 4P, the algebraic set V is a double cover of C 8 , ramified over the zero level set of Δ. Furthermore, the two roots of X 9 ² − Q(x 1 , . . . , x 8 )X 9 + P(x 1 , . . . , x 8 ), as a polynomial in X 9 , are given by the traces of the commutators [s, t] and [t, s] = [s, t] −1 . More recently, Gongopadhyay and Lawton study in [17] the character variety X SL 4 (C) (F 2 ), and describe a minimal global coordinate system of order 30 for it. However, the set of relations between the traces of this minimal set is not known. The Figure Eight knot complement We state briefly some results about the SL 3 (C)-character variety of the figure eight knot complement. It is one of the very few SL 3 (C)-character varieties of three-manifolds studied exhaustively. We will come back to it in Sect. 4.4. The results were obtained independently by Falbel, Guilloux, Koseleff, Rouillier and Thistlethwaite in [9], and by Heusener, Muñoz and Porti in [20]. Denoting by Γ 8 the fundamental group of the Figure Eight knot complement, they describe the character variety X SL 3 (C) (Γ 8 ). Remark 6 We take here the notation R 1 , R 2 , R 3 given in [9]. These components are denoted respectively by V 0 , V 1 and V 2 in [20]. Besides determining the irreducible components R 1 , R 2 and R 3 , Falbel, Guilloux, Koseleff, Rouillier and Thistlethwaite give parameters, in Section 5 of their article [9], for explicit representations corresponding to the points of the character variety. Character varieties for real forms We are going to be interested in representations of a finitely generated group Γ taking values in some real forms of SL n (C), up to conjugacy. We will focus on the real forms SU(3) and SU(2, 1) of SL 3 (C) in the detailed example that we will consider further. In order to study the representations up to conjugacy, we will consider the SL n (C)-character variety and will try to identify the locus of representations taking values in real forms. When n = 2, the problem was treated by Morgan and Shalen in [23] and by Goldman in his article [13]. Real forms and definition Let us first recall the classification of the real forms of SL n (C). For a detailed exposition of the results that we state, see the book of Helgason [18]. Recall that a real form of a complex Lie group G C is a real Lie group G R such that G C = C ⊗ R G R . The real forms of SL n (C) belong to three families: the real groups SL n (R), the unitary groups SU( p, q) and the quaternion groups SL n/2 (H). We give the definitions of the last two families in order to fix the notation. Definition 4 Let n ∈ N and p, q ∈ N such that n = p + q. Denote by I p,q the block matrix: We define the group SU( p, q) as follows: It is a real Lie group, which is a real form of SL n (C). Definition 5 Let n ∈ N. Denote by J 2n the block matrix: We define the group SL n (H), also denoted SU*(2n), as follows: It is a real Lie group, which is a real form of SL 2n (C). In order to study representations taking values in real forms, we consider the following definition of a character variety for a real form: Definition 6 Let G be a real form of SL n (C). Let Γ be a finitely generated group. We call the G-character variety of Γ the image of the map Hom(Γ , G) → X SL n (C) (Γ ).
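For concreteness, the block matrices and groups referred to in Definitions 4 and 5 can be written out as follows. These are the standard conventions; the sign and block ordering chosen here are our assumptions and may differ from the author's.
\[
I_{p,q} = \begin{pmatrix} I_p & 0 \\ 0 & -I_q \end{pmatrix},
\qquad
\mathrm{SU}(p,q) = \left\{ A \in \mathrm{SL}_n(\mathbb{C}) \;:\; {}^{t}\bar{A}\, I_{p,q}\, A = I_{p,q} \right\},
\]
\[
J_{2n} = \begin{pmatrix} 0 & -I_n \\ I_n & 0 \end{pmatrix},
\qquad
\mathrm{SL}_n(\mathbb{H}) \cong \mathrm{SU}^*(2n) = \left\{ A \in \mathrm{SL}_{2n}(\mathbb{C}) \;:\; A\, J_{2n} = J_{2n}\, \bar{A} \right\}.
\]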
In this way, X G (Γ ) is, by definition, a subset of X SL n (C) (Γ ). Remark 8 The set X G (Γ ) given by this definition is a subset of a complex algebraic set, which is not, a priori, a real nor a complex algebraic set. It is the image of a real algebraic set by a polynomial map, and hence a semi-algebraic set. The definition might seem strange if compared to the one for the SL n (C)-character variety. This is due to the fact that the real forms of SL n (C) are real algebraic groups but not complex algebraic groups and that the algebraic construction and the GIT quotient do not work properly when the field is not algebraically closed. Nevertheless, when considering the compact real form SU(n), it is possible to define an SU(n)-character variety by considering a topological quotient. We will show, in the next section, that this topological quotient is homeomorphic to the SU(n)-character variety as defined above. Furthermore, in some cases, the SU(n)-character variety is a strong deformation retraction of the SL n (C)-character variety. This fact fits in the more general frame of real reductive groups and maximal compact subgroups. It is proven for free groups by Casimiro, Florentino, Lawton and Oliveira in [2], for Abelian groups by Florentino and Lawton in [12] and for nilpotent groups by Bergeron in [1]. The character variety X SU(n) (Γ ) as a topological quotient Let n be a positive integer. We are going to show that the quotient Hom(Γ , SU(n))/SU(n), considered as a topological space and where SU(n) acts by conjugation, is naturally homeomorphic to the character variety X SU(n) (Γ ). Let us notice first that a map between these two sets is well defined. Indeed, since two representations taking values in SU(n) which are conjugate in SU(n) are also conjugate in SL n (C), the natural map Hom(Γ , SU(n)) → X SL n (C) (Γ ) factors through the quotient Hom(Γ , SU(n))/SU(n). Proof We consider X SU(n) (Γ ) as a subset of X SL n (C) (Γ ) ⊂ C m , endowed with the usual topology of C m . By definition, we know that the map Hom(Γ , SU(n))/SU(n) → X SU(n) (Γ ) is continuous and surjective. Since a continuous bijection between a compact space and a Hausdorff space is a homeomorphism, it is enough to show that the map is injective. We want to show that if ρ 1 , ρ 2 ∈ Hom(Γ , SU(n)) are representations such that χ ρ 1 = χ ρ 2 , then ρ 1 and ρ 2 are conjugate in SU(n). This is the statement of Lemma 2, which we prove below. In order to prove Proposition 1, we are going to show the following lemma, which seems standard despite the lack of references. Lemma 1 Let ρ 1 , ρ 2 ∈ Hom(Γ , SU(n)). If they are conjugate in SL n (C), then they are conjugate in SU(n). Proof Let us deal first with the irreducible case, and treat the general case after that. First case: The representations ρ 1 and ρ 2 are irreducible. Let G ∈ SL n (C) be such that ρ 2 = Gρ 1 G −1 . Let J be the matrix of the Hermitian form x 1 ȳ 1 + · · · + x n ȳ n , which is preserved by the images of ρ 1 and ρ 2 . Since ρ 2 = Gρ 1 G −1 , we know that the image of ρ 1 also preserves the form t Ḡ J G. But ρ 1 is irreducible: its image preserves then a unique Hermitian form up to a scalar. We deduce that J = λ t Ḡ J G with λ ∈ R. Since J is positive definite, we have λ > 0, and, by replacing G by √ λ G, we have J = t Ḡ J G, i.e. G ∈ SU(n). General case. Since ρ 1 is unitary, it is semi-simple, and therefore can be written as ρ 1 (1) ⊕ · · · ⊕ ρ 1 (m) acting on an orthogonal sum E 1 ⊕ · · · ⊕ E m . The same holds for ρ 2 , which can be written as ρ 2 (1) ⊕ · · · ⊕ ρ 2 (m′) acting on F 1 ⊕ · · · ⊕ F m′ , also in orthogonal sum.
Since ρ_1 and ρ_2 are conjugate in SL_n(C), every irreducible representation occurs with the same multiplicity in ρ_1 and in ρ_2. Therefore m = m' and, perhaps after rearranging the terms and conjugating ρ_2 by an element of SU(n), we can suppose that, for each i, the representations ρ_1^{(i)} and ρ_2^{(i)} act on E_i and are unitary, conjugate and irreducible; by the first case, there exists G_i ∈ SU(E_i) that conjugates them. The block-diagonal matrix G_1 ⊕ ... ⊕ G_m then conjugates ρ_1 to ρ_2 in SU(n).
Anti-holomorphic involutions and irreducible representations
In this section, we find the locus of the character varieties for the real forms of SL_n(C) inside the SL_n(C)-character variety X_{SL_n(C)}(Γ). Before focusing on irreducible representations, we show the following proposition, which ensures that two character varieties for two different unitary real forms intersect only in points which correspond to reducible representations.
Proposition 2 Let ρ ∈ Hom(Γ, SL_n(C)) be an irreducible representation whose character belongs to both X_{SU(p,q)}(Γ) and X_{SU(p',q')}(Γ), with p + q = p' + q' = n. Then {p', q'} = {p, q}.
Proof Suppose that ρ is irreducible. It is then, up to conjugacy, the only representation of character χ_ρ. Since χ_ρ ∈ X_{SU(p,q)}, we can suppose that ρ takes values in SU(p, q). Then, for every g ∈ Γ, we have {}^t\overline{ρ(g)} J_{p,q} ρ(g) = J_{p,q}. On the other hand, let us assume that ρ is conjugate to a representation taking values in SU(p', q'). Hence there exists a matrix J'_{p',q'}, conjugate to J_{p',q'}, such that, for every g ∈ Γ, we have {}^t\overline{ρ(g)} J'_{p',q'} ρ(g) = J'_{p',q'}. We deduce that, for every g ∈ Γ,
ρ(g)^{-1} (J_{p,q})^{-1} J'_{p',q'} ρ(g) = (J_{p,q})^{-1} J'_{p',q'}.
The matrix (J_{p,q})^{-1} J'_{p',q'} commutes with the whole image of Γ. Since ρ is irreducible, it is a scalar matrix. We deduce that J_{p,q} has either the same signature as J'_{p',q'}, or the opposite signature.
From now on, we will limit ourselves to irreducible representations and will consider two anti-holomorphic involutions of the group SL_n(C), which induce anti-holomorphic involutions on the character variety. We will denote by φ_1 and φ_2 the two anti-holomorphic automorphisms of the group SL_n(C) given by
φ_1(A) = \bar{A}, φ_2(A) = {}^t\bar{A}^{-1}.
These two involutions induce anti-holomorphic involutions Φ_1 and Φ_2 on the representation variety Hom(Γ, SL_n(C)), in such a way that, for a representation ρ, Φ_i(ρ) = φ_i ∘ ρ. We will still denote these involutions on X_{SL_n(C)}(Γ) by Φ_1 and Φ_2. We will denote by Fix(Φ_1) and Fix(Φ_2) the points in X_{SL_n(C)}(Γ) fixed respectively by Φ_1 and Φ_2.
Remark 9 If ρ ∈ Hom(Γ, SL_n(C)) is conjugate to a representation taking values in SL_n(R), then χ_ρ ∈ Fix(Φ_1). Furthermore, if ρ is conjugate to a representation taking values in SL_{n/2}(H), then χ_ρ ∈ Fix(Φ_1), since a matrix A ∈ SL_{n/2}(H) is conjugate to \bar{A}. On the other hand, if ρ is conjugate to a representation taking values in SU(p, q), then χ_ρ ∈ Fix(Φ_2): indeed, if A is a (p, q)-unitary matrix, then it is conjugate to {}^t\bar{A}^{-1}. In this way, X_{SL_n(R)}(Γ) ∪ X_{SL_{n/2}(H)}(Γ) ⊂ Fix(Φ_1) and X_{SU(p,q)}(Γ) ⊂ Fix(Φ_2) for every p + q = n.
From now on, we will work in the reciprocal direction. We will show that an irreducible representation with character in Fix(Φ_1) or Fix(Φ_2) is conjugate to a representation taking values in a real form of SL_n(C). The corresponding statements can be proven in a more general frame by considering the involution A ↦ Pφ(A)P^{-1}, where φ equals φ_1 or φ_2. The involution will define a real form containing the image of an arbitrary lift ρ of a character χ_ρ. However, this abstract proof gives no hint on how to determine the corresponding real form. We give here an elementary proof, which allows one to determine the real form when a matrix conjugating ρ and Φ(ρ) is known. Let us begin with the case of Fix(Φ_2), which corresponds to unitary groups. The result is given in the following proposition:
Proposition 4 Let ρ ∈ Hom(Γ, SL_n(C)) be an irreducible representation such that χ_ρ ∈ Fix(Φ_2).
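As a quick check of the unitary case of Remark 9, the following short computation (a sketch using the matrix I_{p,q} of Definition 4) makes explicit why characters of SU(p, q)-representations are fixed by Φ_2.

```latex
% If A lies in SU(p,q), then {}^t\bar{A} I_{p,q} A = I_{p,q}, hence
\[
{}^t\bar{A}^{-1} \;=\; I_{p,q}\, A \, I_{p,q}^{-1},
\]
% so phi_2(A) = {}^t\bar{A}^{-1} is conjugate to A and has the same trace.
% Applying this to A = rho(g) for every g in Gamma gives
\[
\operatorname{tr}\bigl(\Phi_2(\rho)(g)\bigr)
  \;=\; \operatorname{tr}\bigl({}^t\overline{\rho(g)}^{\,-1}\bigr)
  \;=\; \operatorname{tr}\bigl(\rho(g)\bigr),
\qquad \text{hence } \chi_{\Phi_2(\rho)} = \chi_\rho .
\]
```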
Then there exist p, q ∈ N with n = p + q such that ρ is conjugate to a representation taking values in SU(p, q).
Proof We know that χ_ρ ∈ Fix(Φ_2), so the representations ρ and Φ_2(ρ) have the same character. Since ρ is irreducible, ρ and Φ_2(ρ) are conjugate. Then there exists P ∈ GL_n(C) such that, for every g ∈ Γ, we have Pρ(g)P^{-1} = {}^t\overline{ρ(g)}^{-1}. By considering the inverse, conjugating and transposing, we obtain, for every g ∈ Γ, that
ρ(g) = {}^t\bar{P}^{-1} \, {}^t\overline{ρ(g)}^{-1} \, {}^t\bar{P}.
By replacing {}^t\overline{ρ(g)}^{-1} by Pρ(g)P^{-1} in this expression, we deduce that
ρ(g) = ({}^t\bar{P}^{-1} P) ρ(g) ({}^t\bar{P}^{-1} P)^{-1}.
The matrix P^{-1}\,{}^t\bar{P} commutes with the whole image of ρ. But ρ is irreducible, so there exists λ ∈ C such that P^{-1}\,{}^t\bar{P} = λ Id. By taking the determinant, we know that |λ| = 1. Up to multiplying P by a square root of λ, we can suppose that λ = 1. We then have P = {}^t\bar{P}, which means that P is a Hermitian matrix. We therefore have a Hermitian matrix P such that, for every g ∈ Γ, {}^t\overline{ρ(g)} P ρ(g) = P. The representation ρ then takes values in the unitary group of P. Denoting by (p, q) the signature of P, the representation ρ is then conjugate to a representation taking values in SU(p, q).
Let us now see the case of Fix(Φ_1), which corresponds to representations taking values in SL_n(R) or SL_{n/2}(H). The result is given in the following proposition:
Proposition 5 Let ρ ∈ Hom(Γ, SL_n(C)) be an irreducible representation such that χ_ρ ∈ Fix(Φ_1). Then ρ is conjugate either to a representation taking values in SL_n(R), or to a representation taking values in SL_{n/2}(H) (when n is even).
We are going to give a proof of this statement inspired by the proof of Proposition 4. An alternative proof can be done by adapting the proof given by Morgan and Shalen in the third part of their article [23] for the SL_2(C) case.
Lemma 3 Let P ∈ SL_n(C) be such that P\bar{P} = Id. Then there exists Q ∈ GL_n(C) such that P = Q\bar{Q}^{-1}.
This fact is an immediate consequence of Hilbert's Theorem 90, which says that H^1(Gal(C/R), SL_n(C)) is trivial. We give here an elementary proof.
Proof We search for Q of the form Q_α = αId + \bar{α}P. These matrices trivially satisfy P\bar{Q}_α = Q_α, so that P = Q_α\bar{Q}_α^{-1} whenever Q_α is invertible. It is then sufficient to find α ∈ C such that det(Q_α) ≠ 0. But det(Q_α) = \bar{α}^n det(P + (α/\bar{α})Id), so any α such that -α/\bar{α} is not an eigenvalue of P works.
Lemma 4 Let P ∈ SL_{2m}(C) be such that P\bar{P} = -Id. Then there exists Q ∈ GL_{2m}(C) such that P = Q J_{2m} \bar{Q}^{-1}.
Proof We search for Q of the form Q_α = αJ_{2m} + \bar{α}P. These matrices satisfy P\bar{Q}_α = \bar{α}PJ_{2m} - αId = Q_αJ_{2m}, so that P = Q_αJ_{2m}\bar{Q}_α^{-1} whenever Q_α is invertible. It is then sufficient to find α ∈ C such that det(Q_α) ≠ 0. But det(Q_α) = α^{2m} det(J_{2m} + (\bar{α}/α)P), so any α such that α/\bar{α} is not an eigenvalue of J_{2m}P works.
Proof of Proposition 5 Since χ_ρ ∈ Fix(Φ_1), the representations ρ and Φ_1(ρ) = \bar{ρ} have the same character. Since ρ is irreducible, they are conjugate: there exists P ∈ SL_n(C) such that, for every g ∈ Γ, ρ(g) = P\overline{ρ(g)}P^{-1}. The matrix P\bar{P} commutes with the whole image of ρ. But ρ is irreducible, so there exists λ ∈ C such that P\bar{P} = λId. In particular, P and \bar{P} commute, so, by conjugating the equality above, we have λ ∈ R. Furthermore, by taking the determinant, we have λ^n = 1, hence λ = ±1 and P\bar{P} = ±Id. We have two cases:
First case: P\bar{P} = Id. By Lemma 3, there exists Q ∈ GL_n(C) such that P = Q\bar{Q}^{-1}. We deduce that, for all g ∈ Γ,
Q^{-1}ρ(g)Q = Q^{-1}P\overline{ρ(g)}P^{-1}Q = \bar{Q}^{-1}\overline{ρ(g)}\bar{Q} = \overline{Q^{-1}ρ(g)Q}.
This means that the representation Q^{-1}ρQ takes values in SL_n(R).
Second case: P\bar{P} = -Id. By taking the determinant, we see that this case can only happen if n is even. Let m = n/2. By Lemma 4, there exists Q ∈ GL_{2m}(C) such that P = QJ_{2m}\bar{Q}^{-1}. We deduce that, for all g ∈ Γ,
Q^{-1}ρ(g)Q = Q^{-1}P\overline{ρ(g)}P^{-1}Q = J_{2m}\,\overline{Q^{-1}ρ(g)Q}\,J_{2m}^{-1}.
This means that the representation Q^{-1}ρQ takes values in SL_m(H).
With the propositions above, we have shown that an irreducible representation with character in Fix(Φ_1) or Fix(Φ_2) is conjugate to a representation taking values in a real form of SL_n(C). By combining Propositions 4 and 5 we obtain immediately a proof of Theorem 1.
A detailed example: the free product Z/3Z * Z/3Z
We are going to study in detail the character varieties X_{SU(2,1)}(Z/3Z * Z/3Z) and X_{SU(3)}(Z/3Z * Z/3Z). We will begin by studying the character variety X_{SL_3(C)}(Z/3Z * Z/3Z) inside the variety X_{SL_3(C)}(F_2) given by Lawton in [21]. We will then focus on the fixed points of the involution Φ_2, which will give us the two character varieties with values in real forms, and we will finally describe them in detail and find the slices parametrized by Parker and Will in [24] and by Falbel, Guilloux, Koseleff, Rouillier and Thistlethwaite in [9].
The character variety X_{SL_3(C)}(Z/3Z * Z/3Z)
In this section, we will study the character variety X_{SL_3(C)}(Z/3Z * Z/3Z). First, notice that Z/3Z * Z/3Z is a quotient of the free group of rank two F_2. Thanks to Remark 2, we are going to identify X_{SL_3(C)}(Z/3Z * Z/3Z) as a subset of X_{SL_3(C)}(F_2) ⊂ C^9. Let us begin by making some elementary remarks on order 3 elements of SL_3(C).
Remark 11
- If S ∈ SL_3(C), then the characteristic polynomial of S is X^3 - tr(S)X^2 + tr(S^{-1})X - 1.
- If S ∈ SL_3(C) is of order 3, then S^3 - Id = 0. Hence the matrix S is diagonalizable and admits cube roots of 1 as eigenvalues. We will denote these cube roots by 1, ω and ω^2.
We can now identify the irreducible components of X_{SL_3(C)}(Z/3Z * Z/3Z), thanks to the following proposition:
Proposition The character variety X_{SL_3(C)}(Z/3Z * Z/3Z) is the union of 15 isolated points, corresponding to totally reducible representations, and of an irreducible component X_0.
Proof Write S and T for the images of the two generators s and t.
First case: S or T is a scalar matrix. Suppose, for example, that S = ω^i Id with i ∈ {0, 1, 2}. Since T is of finite order and hence diagonalizable, the representation is totally reducible, and it is conjugate either to a representation of the form (S, T) = (ω^i Id, ω^j Id), or to a representation given by (S, T) = (ω^i Id, diag(1, ω, ω^2)). Considering the symmetries, we obtain 15 points of the character variety, classified by the traces of S and T in the following way (where i, j ∈ {0, 1, 2}): the pairs (tr(S), tr(T)) = (3ω^i, 3ω^j) give 9 points, the pairs (3ω^i, 0) give 3 points, and the pairs (0, 3ω^j) give 3 more points. Since the pairs (tr(S), tr(T)) are pairwise distinct for these 15 points, and since both traces vanish in the second case below, these points are isolated in X_{SL_3(C)}(Z/3Z * Z/3Z).
Second case: neither S nor T is a scalar matrix. Then tr(S) = tr(S^{-1}) = tr(T) = tr(T^{-1}) = 0, and the corresponding characters form a set X_0 ⊂ C^5, with coordinates (z, z', w, w', x) given by the traces of the images of st, (st)^{-1}, st^{-1}, ts^{-1} and [s, t], defined by the single equation x^2 - Q(z, z', w, w')x + P(z, z', w, w') = 0, the restriction of the equation of V. This polynomial is irreducible. Indeed, if it were not, it would be equal to a product of two polynomials of degree 1 in x. By replacing z', w and w' by 0, we would obtain a factorization of the form x^2 + 3x + z^3 + 9 = (x - R_1(z))(x - R_2(z)), with R_1(z)R_2(z) = z^3 + 9 and R_1(z) + R_2(z) = -3. By considering the degrees of the polynomials R_1 and R_2 we easily obtain a contradiction. Since the polynomial defining X_0 is irreducible, X_0 is an irreducible component of the algebraic set X_{SL_3(C)}(Z/3Z * Z/3Z). Furthermore, it can be embedded into C^5 and it is a ramified double cover of C^4.
Reducible representations in the component X_0
In order to complete the description of the character variety X_{SL_3(C)}(Z/3Z * Z/3Z), we are going to identify the points corresponding to reducible representations. The 15 isolated points of the algebraic set come from totally reducible representations; it remains to determine the points of the component X_0 corresponding to reducible representations. We consider here X_0 ⊂ C^5, with coordinates (z, z', w, w', x) corresponding to the traces of the images of st, (st)^{-1}, st^{-1}, ts^{-1} and [s, t] respectively. We denote by X_0^{red} the image of reducible representations in X_0.
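The count of 15 isolated points rests on the possible traces of an order-3 element; the short computation below is a sketch that only uses Remark 11.

```latex
% Possible traces of an element S of order dividing 3 in SL_3(C).
% S is diagonalizable with eigenvalues among {1, w, w^2}, w = e^{2i\pi/3},
% and the product of the eigenvalues is det(S) = 1, so the eigenvalue
% multisets and the corresponding traces are
\[
\{1,1,1\},\ \{\omega,\omega,\omega\},\ \{\omega^2,\omega^2,\omega^2\},\ \{1,\omega,\omega^2\}
\quad\Longrightarrow\quad
\operatorname{tr}(S) \in \{\,3,\ 3\omega,\ 3\omega^2,\ 0\,\}.
\]
% The first three cases are the scalar matrices w^i Id; the last one is the
% non-scalar case, conjugate to diag(1, w, w^2), of trace 0.
```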
Remark 12 If the coordinates (z, z', w, w', x) correspond to a reducible representation, then Δ(z, z', w, w') = 0. Indeed, for a reducible representation, the two commutators [s, t] and [t, s] have the same trace, and the polynomial X^2 - Q(z, z', w, w')X + P(z, z', w, w') has a double root equal to this common trace.
We are going to show that the locus of the characters of reducible representations is a set of 9 complex lines, which intersect at six points with triple intersections, corresponding to totally reducible representations. Before doing the proof, let us fix a notation for these lines. For i, j ∈ {0, 1, 2}, let L_{(i,j)} denote the complex line of characters whose projection to the coordinates (z, w) is the line passing through (3ω^i, 0) and (0, 3ω^j). Each L_{(i,j)} is a complex line parametrized by the coordinate z (or w), and these lines intersect with triple intersections at the six points of coordinates (z, w) = (0, 3ω^j) and (z, w) = (3ω^i, 0), where i, j ∈ {0, 1, 2}. With this notation, we can state in a simpler way the proposition describing the points of X_0 corresponding to reducible representations.
Proposition 7 The points of X_0 corresponding to reducible representations are exactly those in the lines L_{(i,j)}. In other terms, we have
X_0^{red} = ⋃_{i,j ∈ {0,1,2}} L_{(i,j)}.
Proof We are going to show a double inclusion. Let us first show that X_0^{red} ⊂ ⋃ L_{(i,j)}. Let ρ ∈ Hom(Z/3Z * Z/3Z, SL_3(C)) be a reducible representation such that χ_ρ ∈ X_0. Let S = ρ(s) and T = ρ(t). Since the representation is reducible, we can suppose, after conjugating ρ, that
S = diag(S', 1), T = diag(T', 1)
with S', T' ∈ SL_2(C), and recover the points of the other lines L_{(i,j)} by considering (ω^iS, ω^jT). The character of every reducible representation of this form satisfies z = z', w = w' and z + w = 3, so it lies on the line through (3, 0) and (0, 3); this gives the first inclusion. For the reverse inclusion, we have to show that any z ∈ C can be written as 1 + tr(S'T') with S', T' ∈ SL_2(C) of order 3 and trace -1. Fix z ∈ C. Since the SL_2(C)-character variety of F_2 is isomorphic to C^3 via the trace maps of the two generators and their product, there exist matrices S', T' ∈ SL_2(C) such that (tr(S'), tr(T'), tr(S'T')) = (-1, -1, z - 1). In this case, the two matrices S' and T' have trace -1 and hence order 3, and we have z = 1 + tr(S'T').
The fixed points of the involution Φ_2
We are going to describe here the character varieties X_{SU(2,1)}(Z/3Z * Z/3Z) and X_{SU(3)}(Z/3Z * Z/3Z) as fixed points of the involution Φ_2 of X_{SL_3(C)}(Z/3Z * Z/3Z). In this technical subsection, we will choose coordinates and find equations that describe the fixed points of Φ_2. We will identify the characters corresponding to reducible representations as lying in an arrangement of 9 lines, and show that the ones corresponding to irreducible representations lie in a smooth manifold of real dimension 4. We will describe the set obtained in this way in Sect. 4.4. From now on, we will only consider the points of Fix(Φ_2) ∩ X_0. Recall that we have identified X_0 with a subset of C^5, with coordinates (z, z', w, w', x).
Remark 15 If (z, z', w, w', x) ∈ Fix(Φ_2) ∩ X_0, then z' = \bar{z} and w' = \bar{w}. In this case, the polynomials P and Q, which we will denote by P(z, w) and Q(z, w), take real values. Furthermore, the discriminant Δ(z, w) of X^2 - Q(z, w)X + P(z, w) can be written in terms of the function f(z) = |z|^4 - 8Re(z^3) + 18|z|^2 - 27 described by Goldman in [14], which is nonzero at the traces of regular elements of SU(2, 1) (positive for loxodromic elements, negative for elliptic elements). At a point of Fix(Φ_2) ∩ X_0, the two roots of X^2 - Q(z, w)X + P(z, w) are the traces of the images of the commutators [s, t] and [t, s]. Since these commutators are inverses of each other, and since we are in Fix(Φ_2), the two roots are complex conjugate, which is equivalent to Δ(z, w) ≤ 0.
Proposition 8 We have:
Fix(Φ_2) ∩ X_0 = {(z, \bar{z}, w, \bar{w}, x) ∈ X_0 | Δ(z, w) ≤ 0}.
Proof We are going to show a double inclusion. The first one is given by Remark 15. Let us show the second: if z' = \bar{z}, w' = \bar{w} and Δ(z, w) ≤ 0, the involution Φ_2 fixes the coordinates (z, z', w, w'), and it sends x to the complex conjugate of the other root of X^2 - Q(z, w)X + P(z, w); since Δ(z, w) ≤ 0, the two roots are complex conjugate, so x is fixed as well.
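The last step of the proof of Proposition 7 uses a standard fact about SL_2(C); the following sketch spells it out for completeness.

```latex
% An element A of SL_2(C), A != Id, has order 3 if and only if tr(A) = -1.
% By Cayley-Hamilton, A^2 - tr(A) A + Id = 0. If tr(A) = -1, then
% A^2 + A + Id = 0, and multiplying by (A - Id) gives A^3 = Id.
% Conversely, if A^3 = Id and A != Id, the eigenvalues of A are the two
% primitive cube roots of unity, so
\[
\operatorname{tr}(A) = \omega + \omega^2 = -1 .
\]
```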
From now on, we will consider Fix(Φ_2) ∩ X_0 as the set of points (z, w, x) ∈ C^2 × C such that (z, \bar{z}, w, \bar{w}, x) ∈ X_0 and Δ(z, w) ≤ 0. The projection on the first two coordinates is a double cover of {(z, w) ∈ C^2 | Δ(z, w) ≤ 0} outside the level set Δ(z, w) = 0, where points have a unique pre-image. We are going to identify the points corresponding to reducible representations, and then show that, outside these points, the SU(2, 1)- and SU(3)-character varieties are smooth manifolds.
Proposition 9 Inside Fix(Φ_2) ∩ X_0, the points corresponding to reducible representations are exactly the points whose projection (z, w) lies on one of the lines L_{(i,j)}; in particular, at such a point z^3, w^3 ∈ R and Δ(z, w) = 0.
Proof It is an immediate consequence of Proposition 7 and of the fact that we are in Fix(Φ_2) and hence, in the coordinates (z, z', w, w', x) ∈ C^5, we have z' = \bar{z} and w' = \bar{w}. At last, we check that Δ(z, 3 - z) = 0: if we are in the setting of the equivalence, we have Δ(z, w) = 0.
Description of X_{SU(2,1)}(Z/3Z * Z/3Z) and X_{SU(3)}(Z/3Z * Z/3Z)
We are going to describe here the character varieties X_{SU(2,1)}(Z/3Z * Z/3Z) and X_{SU(3)}(Z/3Z * Z/3Z). In order to do it, we are going to study Fix(Φ_2) in detail, verify that it is the union of the two character varieties, and that their intersection corresponds to reducible representations. We finally consider two slices of Fix(Φ_2), which were studied respectively by Parker and Will in [24] and by Falbel, Guilloux, Koseleff, Rouillier and Thistlethwaite in [9].
First, consider the 15 isolated points of X_{SL_3(C)}(Z/3Z * Z/3Z), which are all in Fix(Φ_2). They correspond to totally reducible representations. Since an order 3 matrix is conjugate to a matrix in SU(2, 1) and to a matrix in SU(3), we have the following remark: these 15 points belong to both X_{SU(2,1)}(Z/3Z * Z/3Z) and X_{SU(3)}(Z/3Z * Z/3Z). It remains to consider the representations of X_0 ∩ Fix(Φ_2). Proposition 2 ensures that points corresponding to irreducible representations lie in exactly one of the character varieties X_{SU(2,1)}(Z/3Z * Z/3Z) and X_{SU(3)}(Z/3Z * Z/3Z). For the points of X_0 corresponding to reducible representations, we briefly modify the proof of Proposition 7 in order to obtain the following remark: every point of X_0 ∩ Fix(Φ_2) corresponding to a reducible representation lies in X_{SU(2,1)}(Z/3Z * Z/3Z) ∪ X_{SU(3)}(Z/3Z * Z/3Z). Furthermore, by noticing that an irreducible representation cannot take values at the same time in SU(2, 1) and in SU(3), we obtain the following proposition: Fix(Φ_2) is the union of X_{SU(2,1)}(Z/3Z * Z/3Z) and X_{SU(3)}(Z/3Z * Z/3Z), and the intersection of these two character varieties consists of characters of reducible representations.
At last, we are going to draw some slices of Fix(Φ_2), corresponding to projections on the coordinates (z, w), followed by a restriction to a slice of the form z = z_0 or w = w_0. Recall that the projection on the coordinates (z, w) is a double cover outside the level set Δ(z, w) = 0, where points have a unique pre-image. We draw, in a plane of the form (z, w_0), the curve Δ(z, w_0) = 0, and then we identify the regions contained in X_{SU(2,1)}(Z/3Z * Z/3Z) and those contained in X_{SU(3)}(Z/3Z * Z/3Z).
The Parker-Will slice
In their article [24], Parker and Will give an explicit parametrization of representations of Z/3Z * Z/3Z = <s, t> taking values in SU(2, 1) such that the image of st is unipotent. This corresponds exactly to representations such that the trace of the image of st is equal to 3. They form a family of representations of the fundamental group of the Whitehead link complement containing the holonomy representation of a spherical CR uniformization of the manifold. This particular representation has coordinates (z, w, x) = (3, 3, 15 + 3i). We can see this slice in Fig. 1. We see three lobes corresponding to representations taking values in SU(2, 1), which intersect at a singular point, of coordinate z = 0, corresponding to a totally reducible representation of coordinates (z, w, x) = (0, 3, 3). Going back to the coordinates (z, w, x) on X_0 ∩ Fix(Φ_2), the representations of this slice form, topologically, three spheres touching at a single point.
Fig. 3 The slice w_0 = 3.5 + 0.1i. The region is smooth.
Fig. 4 The slice w_0 = 1 + 0.1i. The region is smooth.
The Thistlethwaite slice
In the last section of their article [9], Falbel, Guilloux, Koseleff, Rouillier and Thistlethwaite give an explicit parametrization of representations lifting the irreducible components R_1, R_2 and R_3 of X_{SL_3(C)}(Γ_8), as we saw in Sect. 2.2.2. They also give necessary and sufficient conditions for a representation to take values in SU(2, 1) or SU(3): they therefore parametrize lifts of the intersections of R_1 and R_2 with X_{SU(2,1)}(Γ_8) and X_{SU(3)}(Γ_8). Recall that the fundamental group of the figure eight knot complement has the following presentation:
Γ_8 = < g_1, g_2, g_3 | g_2 = [g_3, g_1^{-1}], g_1 g_2 = g_2 g_3 >.
As noticed by Deraux in [6] and by Parker and Will in [24], if G_1, G_2 and G_3 are the images of g_1, g_2 and g_3 respectively by a representation with character in R_2, then (G_1G_2)^3 = (G_1^2G_2)^3 = G_2^4 = Id. Setting T = (G_1G_2)^{-1} and S = G_1^2G_2, we have two elements of SL_3(C) of order 3 which generate the image of the representation, since G_1 = ST, G_3 = TS and G_2 = (TST)^{-1} = (TST)^3. Hence we can consider R_2 ⊂ X_{SL_3(C)}(Z/3Z * Z/3Z): this component corresponds to the slice of coordinate w = 1, since TST has order 4 if and only if tr(TST) = tr(ST^2) = tr(ST^{-1}) = 1. We can see this slice in Fig. 2. It has three regions of representations taking values in SU(2, 1) and a region of representations taking values in SU(3). They intersect at three singular points, corresponding to reducible representations. Going back to the coordinates (z, x) on the slice w = 1 of X_0 ∩ Fix(Φ_2), these regions are the images of four topological spheres which intersect at three points.
Other remarkable slices
At last, to complete the whole picture, we describe three more slices of X_0 ∩ Fix(Φ_2). Recall that, thanks to Proposition 9, a slice of the form w = w_0 can only have singular points if w_0^3 ∈ R. On the one hand, in Fig. 3, we see the slices w = 3.5 and w = 3.5 + 0.1i. In each one there are three regions corresponding to irreducible representations taking values in SU(2, 1), which intersect, in the slice w = 3.5, at three points corresponding to reducible representations. There are no points corresponding to representations with values in SU(3). On the other hand, in Fig. 4, we see the slice w = 1 + 0.1i: there are three regions corresponding to irreducible representations taking values in SU(2, 1) and a region corresponding to representations taking values in SU(3).
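One direction of the trace condition singled out for the slice w = 1 can be checked directly; the sketch below only treats a regular element of order 4 and is not a substitute for the full equivalence used in [9].

```latex
% If M in SL_3(C) is diagonalizable with eigenvalues {1, i, -i}
% (a regular element of order 4), then
\[
\operatorname{tr}(M) = 1 + i + (-i) = 1,
\qquad
\operatorname{tr}(M^{-1}) = 1 + (-i) + i = 1,
\qquad
M^4 = \mathrm{Id},
\]
% which is exactly the value w = tr(TST) = tr(ST^{-1}) = 1 defining the slice.
```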
Uncertainty and Sensitivity Analysis of the In-Vessel Hydrogen Generation for Gen-III PWR and Phebus FPT-1 with MELCOR 2.2
In this study, uncertainty and sensitivity analyses were performed to study the hydrogen generation (the figure of merit, FoM) during the in-vessel phase of a severe accident in a light water reactor. The focus of this work was on a large generation-III pressurized water reactor (PWR) and a double-ended hot leg (HL) large break loss of coolant accident (LB-LOCA) without safety injection (SI). The Phebus FPT-1 integral experiment, emulating a LOCA, was also studied, and the experiment outcomes were applied to the plant-scale modelling. The best estimate calculations were supplemented with an uncertainty analysis (UA) based on 400 input decks and Latin hypercube sampling (LHS). Additionally, a sensitivity analysis (SA) utilizing linear regression and linear and rank correlation coefficients was performed. The study was prepared with a new open-source MELCOR sensitivity and uncertainty tool (MelSUA), which accompanies this work. The FPT-1 best-estimate model results were within the 10% experimental uncertainty band for the final FoM. It was shown that the hydrogen generation uncertainties in the PWR were similar to those in FPT-1, with the 95th percentile covered inside a ~50% band and the 50th percentile inside a ~25% band around the FoM median. Two different power profiles for the PWR were compared, indicating their impact on the uncertainty and also on the sensitivity results. Despite a similar setup, different uncertainty parameters impacted the FoM, showing the difference between scales and also a significant impact of the boundary conditions on the sensitivity analysis.
Hydrogen during a Severe Accident
In the course of a severe accident (SA) in a light water reactor (LWR), large quantities of hydrogen gas can be generated. Prior to the reactor pressure vessel (RPV) failure, hydrogen is generated during core degradation because of the exothermic oxidation of core materials. The primary source is the interaction of zirconium, mainly from cladding, with steam by the exothermic reaction: Zr + 2H2O → ZrO2 + 2H2. Other, less important reactions also generate some hydrogen: the steam interaction with steel and the steam reaction with the control poison (e.g., B4C). After the RPV failure, hydrogen can be a product of various ex-vessel molten core concrete interactions (MCCI). In principle, it can also be generated by radiolysis of water. Hydrogen production is a problem for both pressurized water reactors (PWR) and boiling water reactors (BWR), as was observed during the partial meltdown in Three Mile Island 2 and during the series of meltdowns in the Fukushima Daiichi plant [1-3]. In nuclear power plants (NPP) with PWR reactors, hydrogen is a threat to the safety barriers and especially to the containment building. It creates a risk of global hydrogen combustion.
Figure 1. The MELCOR nodalization of the bundle section, including CVH, HS, FL, and COR models. All dimensions in meters. Based on [29].
Gen-III PWR Model
The basic idea was to use the outcomes of the FPT-1 modelling in plant-scale simulations. In consequence, the gained experience was applied to the PWR model studied in this work. The plant is a generic four-loop unit with a thermal power of 4500 MWth and is considered representative of the generation III European NPP fleet (see Table 1). It was defined and developed in the NARSIS research project [27,31].
An earlier version of the model with a complex containment nodalization scheme was studied for an SBO accident in [32]. In this work, the model was updated, a simple single-node containment was added to reduce computational effort, and the core setup was based on the FPT-1 outcomes and recent MELCOR practices. Figure 3 presents the RPV nodalization scheme for the thermal-hydraulics and flow packages (CVH + FL), the core modelling (COR) package, and the heat structures (HS) package. The COR package model has nineteen axial levels and six rings.
The active core region has twelve axial levels (levels 7-18) and one non-active fuel level above and below the active core (levels 6 and 19). The core support plate is modelled by level 5, and the levels below are part of the lower plenum. The core region has five CVs, one per ring. The CVH model was not divided into axial levels to improve computational efficiency, and the approach is equivalent to the FPT-1 modelling approach (Figure 3). Despite being simple, it is expected to represent the typical plant model for LB-LOCA, and it is considered detailed enough for a low-pressure large LOCA-type scenario. Additionally, the model contains a core bypass volume and single downcomer, lower plenum, upper plenum, and upper head volumes (see Figure 3). The lower plenum volume, which is essential, contains the core plate COR model, and it was observed to have a significant effect on model convergence and hydrogen generation. Two models were studied, one with a peaked power profile and one with a power profile similar to FPT-1 (see Figure 2). The power profile was not studied as a part of the uncertainty analysis, as the impact of initial and boundary conditions was not part of this research. The reactor coolant system (RCS) model is presented in Figure 4. The model has two loops; one is a single loop with a pressurizer, called "broken," and the second is a combination of the three other loops, called "intact." The nodalization strategy is based on previous research [14,33]. Other plant details were also modelled, including the main steam line with safety valves, the pressurizer with safety valves and accident valves, the pressurizer relief tank connected with the containment, passive accumulators, and the main and auxiliary feedwater systems. Stable steady-state conditions were obtained long before the accident, and they agreed with the plant definition [31]. The steady state covered full power operation for ~5000 s before the event. The initiating event (IE) was a guillotine double-ended hot leg (HL) large break loss of coolant accident (LB-LOCA). The hot leg break is located on the broken loop with the pressurizer. It was postulated that all active safety injection systems were unavailable from the beginning of the event.
Figure 4. RCS model, including CVH, COR, and HS package models. ACC-accumulator, IRWST-in-containment refueling water storage tank, HL-hot leg, CL-cold leg, SG-steam generators, DC-downcomer, SD-steam dome, EFW-emergency feedwater system, FW-feedwater, MSL-main steam line, PZR-pressurizer, PRT-pressurizer relief tank, PSV-pressurizer safety valve, PDS-pressurizer discharge system, MSRV-main steam relief valve, MSSV-main steam safety valve. Based on [32].
Uncertainty and Sensitivity Analysis Methodology
The methodology applied in this work is presented in Figure 5. At the same time, it is the workflow of the recently developed MelSUA tool, which is open software [34]. The approach is divided into several steps, which are discussed in this section.
The approach is similar to the SNL methodology for uncertainty analysis [35]. The first step is selecting the uncertainty parameters, then identifying appropriate distributions and providing justification for the selection. In a more detailed approach, a screening analysis of the parameters can be performed, which can be an analytical study, e.g., a parametric-type sensitivity analysis for all parameters expected to affect the studied phenomenology. The alternative, but less systematic, approach is to use outcomes obtained from a literature survey. The second path was applied in this work; distribution types and properties were based on the literature but appropriately modified to merge the outcomes of different studies. Moreover, in this work, best-estimate distributions were selected for the studied parameters. It can be argued that, for the study of modelling parameters, the application of uniform distributions could be more reasonable. However, we aimed to obtain best estimate results as far as possible. The next step is the preparation of the template input deck and the best-estimate model for the considered system. The best-estimate analysis can precede the whole methodology. At the same time, the analysis demands the determination of the sample size needed for the statistical significance of the uncertainty measures for the output variables.
This is crucial when we have limited computational resources and are willing to apply a lower number of input decks. In a typical approach to best estimate calculations, like the best estimate plus uncertainty (BEPU) methods, the so-called Wilks-based approach is utilized; the widespread realization is the GRS methodology [36]. This approach is common in design basis accident (DBA) studies [37], but there have been attempts to use it in severe accident studies. It uses order statistics to estimate confidence intervals with reasonable probability. The number of models (input decks) is based on the Wilks theorem [38]; e.g., 93 input decks are needed to provide 95%/95% probability and confidence for two-sided coverage of the distribution, sometimes called the standard tolerance limit (STL) [36]. In this work, considering uncertainty analysis, we calculated over 400 input decks per case, and this approach can be treated more like a typical Monte Carlo study with distributions based on statistics with relatively large samples. It is similar to the SOARCA UA, where about 900 input decks were studied [39]. It has to be highlighted that, in this work, we did not use Wilks. However, in Wilks terms, 400 input decks correspond to at least a 90%/99% confidence/probability two-sided statistical tolerance (at least 388 runs) for the FoM. The 95%/95% tolerance demands 93 runs, and 99%/95% demands 130 runs. Nevertheless, comparisons are not straightforward, and the reader should keep in mind the limitations discussed in the previous paragraph. In the next stage, the sampling of the parameters with proper methods should be performed. The most popular approaches are simple random sampling (sometimes simply called "Monte Carlo") or Latin hypercube sampling (LHS), but other, more sophisticated approaches are available in the literature [40]. In this work, LHS-type sampling was applied to generate parameters based on the selected probability distributions, assuming that the parameters are independent. LHS should not be used for tolerance limit estimation with a Wilks-type analysis, as argued in [41]. The primary motivation for applying LHS in this work was to cover an ample space of states with fewer samples.
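The run counts quoted above can be cross-checked numerically with the standard first-order two-sided Wilks formula; the snippet below is an independent illustration (the function name is ours, and it is not part of MelSUA).

```python
# Minimal sketch: smallest sample size N such that the interval [min, max] of
# N runs is a two-sided tolerance interval covering a fraction `coverage` of
# the output distribution with confidence `confidence` (first-order Wilks).

def wilks_two_sided_n(coverage: float, confidence: float) -> int:
    n = 2
    while True:
        # P(the interval [x_(1), x_(N)] covers at least `coverage` of the population)
        conf = 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1)
        if conf >= confidence:
            return n
        n += 1

if __name__ == "__main__":
    print(wilks_two_sided_n(0.95, 0.95))  # 93 runs  (95%/95%)
    print(wilks_two_sided_n(0.95, 0.99))  # 130 runs (99%/95%)
    print(wilks_two_sided_n(0.99, 0.90))  # 388 runs (90%/99%)
```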
Severe accident analysis was considered, and we motivated our decision by the fact that it is more important to cover a larger space of solutions than to maintain rigor (resulting from pure random sampling). This approach can be questioned and considered an unnecessary complication; it can be argued that 400 samples are large enough to cover the proper space of solutions and that LHS is not needed. For each parameter, 400 values were sampled and then introduced into the MELCOR model by the variable input functionality [42]. A detailed discussion of the sampled parameters is presented in Section 2.4, which follows. It should be mentioned that each case was calculated with a different randomly selected seed for the random number generator. The further step is the automatic generation of a batch of models. Later, simulations are performed for the models and output files are generated. Post-processing of the output files is necessary. Then, phenomenological and statistical analysis covering distributions for the figure of merit, confidence intervals, and an eventual study of outliers and correlations can be performed (see Figure 5). Regarding sensitivity analysis, a simple approach was applied, using the Pearson, Spearman, and Kendall correlation coefficients, dedicated to indicating simple first-order correlations. Additionally, a linear regression for the FoM was prepared with linear fits, where, in this study, the FoM was the final (MELCOR end time step) cumulative hydrogen mass generation (an illustrative sketch of these sensitivity measures is given below). The dedicated MATLAB-based automatic engine (MelSUA) for generating input decks, sampling, pre-processing, post-processing, and code running with PowerShell scripts was developed. The software is available in an open repository and in the Supplementary Material to this work [34,43]. Its main advantage is the use of the popular MATLAB environment and its powerful statistical toolboxes. It allows the use of truncated distributions, dozens of different distributions, XML input files, MATLAB matrix-based mathematical functionality, and several other features. Another critical issue for this work, and also for MELCOR users in general, is failed code runs. In uncertainty analysis with MELCOR, it is common to obtain failed cases. Unfortunately, as practice shows, they can constitute a substantial portion of all cases. What is more, the failure statistics can be substantially different with different code revisions for the same model. This is another factor that SA code users struggle with. For a typical DSA, a discussion on how to work with failed cases is presented in the BEMUSE report [37].
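To make the sampling and the first-order sensitivity measures described above concrete, the following is a minimal, illustrative Python sketch. It is not the MATLAB-based MelSUA implementation: the parameter names are taken from the study, but the uniform ranges and the dummy response are placeholders for the MELCOR-computed FoM.

```python
# Illustrative sketch: LHS sampling of independent parameters and simple
# first-order sensitivity measures (Pearson/Spearman/Kendall and a linear fit)
# for a scalar figure of merit (final cumulative hydrogen mass).
import numpy as np
from scipy import stats
from scipy.stats import qmc

seed = 42
n_samples = 400

# Placeholder ranges (uniform only for illustration; the actual study used
# triangular, truncated normal, and truncated lognormal distributions).
params = {
    "SC1131_2": (2100.0, 2550.0),   # cladding melt breakout temperature [K]
    "HFRZZR":   (2000.0, 22000.0),  # Zr refreezing HTC [W/m2/K]
    "PORDP":    (0.1, 0.5),         # debris porosity [-]
}

sampler = qmc.LatinHypercube(d=len(params), seed=seed)
unit = sampler.random(n=n_samples)                       # points in [0, 1)^d
lows = np.array([lo for lo, hi in params.values()])
highs = np.array([hi for lo, hi in params.values()])
x = qmc.scale(unit, lows, highs)                         # scaled LHS design

# In the real workflow every row of x becomes one MELCOR input deck; here a
# dummy response stands in for the code-computed FoM (kg of H2).
rng = np.random.default_rng(seed)
fom = 250.0 + 0.05 * (x[:, 0] - 2400.0) + rng.normal(0.0, 5.0, n_samples)

for i, name in enumerate(params):
    pearson, _ = stats.pearsonr(x[:, i], fom)
    spearman, _ = stats.spearmanr(x[:, i], fom)
    kendall, _ = stats.kendalltau(x[:, i], fom)
    slope, intercept, rvalue, _, _ = stats.linregress(x[:, i], fom)
    print(f"{name}: pearson={pearson:.2f}, spearman={spearman:.2f}, "
          f"kendall={kendall:.2f}, slope={slope:.3g}, R2={rvalue**2:.2f}")
```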
Such a discussion is essential for a regulatory process, as with failed cases it is difficult to guarantee tolerance limits, and the statistics can be disturbed. However, in SA analysis, the choice of approach is not obvious, as the purpose of the analysis and the requirements are different. A reasonable approach, which allows using results obtained from a failed case, is to study the FoM and discard or accept failed runs according to its status. In this study, when the FoM (hydrogen mass) was not expected to change significantly, which means that the failure occurred after the oxidation phase, we could use the results in the analysis.
Studied Parameters
At the first stage of the analysis, the best-estimate simulations were performed. In the next step, hydrogen-generation-related MELCOR parameters were selected, with the assignment of probability distributions. The primary source of data for the parameter selection was the available literature. It was decided to use the experience gained by other researchers to reduce the amount of work necessary during the initial step of the analysis. The principal reference, and also inspiration, for this study was the SNL report by Gauntt, "An Uncertainty Analysis of the Hydrogen Source Term for a Station Blackout Accident in Sequoyah Using MELCOR 1.8.5" [35], and a related study [44]. They describe and motivate the selection of parameters and phenomena and contain cumulative distributions. Additional sources of parameters were the more recent SNL SOARCA Uncertainty Analysis Report [45] and the studies by Gharari et al. [6,8], Galushin and Kudinov [10], and Itoh et al. [11]. In this work, we focused on MELCOR parameters responsible for phenomenological modelling, and the best estimate approach, as far as possible, was applied. Initial and boundary conditions were not varied, and this should be treated as an assumption. The probability distribution functions for the parameters are presented in Table 2, and each is discussed in detail in the following sections. The best estimate parameters are also provided. Plots of the distributions for the studied parameters are available in Appendix A.
Zircaloy-Steam Oxidation Correlation-SC1001
Five different correlations for the zirconium-steam oxidation rate coefficient were selected (Table 3 and Figure 6). All were used in parabolic rate equations and are strongly dependent on the temperature. As shown in Figure 6, some differences are present and may lead to differences in mass production. The correlations are defined by the proper MELCOR sensitivity coefficients (SC1001). The selection of correlations and the interpolations in intermediate regions were inspired by [35,46]. However, the selection process can be assessed as being more or less arbitrary; alternative selections are available in the literature [6,8]. A discrete uniform distribution was applied in the uncertainty analysis, and the correlations were assumed to be equally likely. However, the code default correlation was Urbanic-Heidrick, and it was the best estimate selection.
Zircaloy Melt Breakout Temperature-SC1131 (2)
An oxidized cladding creates a shell that can hold molten materials (Zr and fuel) inside the fuel rod until the ZrO2 thickness limit or the temperature limit is violated. Then, the shell is breached and the molten mass can candle and relocate. This parameter affects hydrogen production, as it affects steam contact with zirconium and impacts flow blockages. The breakout temperature for the cladding component is available with the field SC1131 (2).
The distribution used in the SNL report [35] was normal-like with a median of 2400 K, in the range 2250-2550 K and with an SD of ~50 K, but in [44] the range 2100-2550 K was considered. In a recent study [6], a uniform distribution with the same temperature range of 2250-2550 K was considered. In this work, we applied a triangular distribution, the same as in the more recent SOARCA UA for the Surry plant, with the median, lower, and upper limits equal to 2400 K, 2100 K, and 2550 K, respectively. The MELCOR 2.2 (M2.2) default and the SOARCA recommendation were equal to 2400 K, and it was the best estimate value in this work [52,53].
Fuel Rod Collapse Temperature-SC1132 (1)
The temperature which causes fuel rods to collapse and form debris, in the case when the cladding is oxidized and there is no unoxidized Zr, is given by the sensitivity coefficient SC1132 (1). It is the extended failure criterion for situations when metallic Zr has candled [42,45,53-56]. It can affect accident progression and hydrogen production [35]. The SNL report applied a normal-like distribution with a median of 2550 K and lower and upper bounds equal to 2400 K and 2800 K, respectively, defined by experimental considerations and the eutectic melt temperature [35]. In contrast, Gharari applied a collapse temperature between 2400 and 2700 K with a uniform distribution [6]. In the SOARCA UA for Surry [53], a normal distribution with a mean equal to 2479 K and a standard deviation of 83 K was applied, and we also applied it in this work. In SOARCA, the recommended value was 2800 K, and it was the default for older code versions [52]. The new M2.2 default value is equal to 2500 K, and it was selected as the best estimate value.
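The two temperature distributions described above can be reproduced with standard statistical libraries; the snippet below is an illustrative SciPy-based sketch (not the MelSUA/MATLAB implementation), and the seed and sample size are arbitrary.

```python
# Illustrative sampling of two of the uncertain inputs described above:
#   SC1131(2): triangular distribution, min 2100 K, mode 2400 K, max 2550 K
#   SC1132(1): normal distribution, mean 2479 K, standard deviation 83 K
import numpy as np
from scipy import stats

rng = np.random.default_rng(2021)
n = 400

# scipy parametrizes the triangular law on [loc, loc + scale] with
# c = (mode - loc) / scale
loc, scale = 2100.0, 2550.0 - 2100.0
c = (2400.0 - loc) / scale
sc1131_2 = stats.triang.rvs(c, loc=loc, scale=scale, size=n, random_state=rng)

sc1132_1 = stats.norm.rvs(loc=2479.0, scale=83.0, size=n, random_state=rng)

print(sc1131_2.min(), sc1131_2.max())   # stays within [2100, 2550] K
print(sc1132_1.mean(), sc1132_1.std())  # close to 2479 K and 83 K
```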
Fractional Dissolution of Materials-FUOZR and FSXSS
Two parameters that define the fraction of the local dissolution of materials during relocation were studied: the uranium dioxide dissolution in molten zirconium (FUOZR) and the steel oxide dissolution in molten steel (FSXSS). These are the secondary material transport parameters defined in the COR_CMT card. The selection is motivated by the fact that the decay heat source changes [6], which affects accident progression and core degradation. In this work, for the FUOZR, a truncated normal distribution was fitted to the Gauntt data [35], with a given median of 0.2 and limits equal to 0.0 and 0.5. Gauntt justifies these values by the U-Zr-O phase diagram, and similar values were also used in [6]. Itoh et al. [11] showed that the steel oxide transport in molten steel can play a role in the hydrogen production; they applied an FSXSS range of 0.6-1.0. This parameter was also studied here with a truncated normal distribution, with the median and upper limit equal to 1.0 (the most probable value) and a lower limit equal to 0.6.
Candling/Refreezing HTC-HFRZZR and HFRZSS
In MELCOR (the COR_CHT card), the user defines a set of refreezing/candling heat transfer coefficients. They affect core blockage during the relocation of molten materials, which affects steam flow and oxidation. In the studied references, some ambiguities were present, as all coefficients are part of the same MELCOR field and typically it is not explicitly stated which were varied. In the SNL study [35,44], it was suggested that only the Zr refreezing coefficient was varied. From Gharari et al. [6], it can be deduced that all candling coefficients were modified. In Itoh et al. [11], the HTC coefficients for Zircaloy and steel (HFRZZR and HFRZSS) were studied, and, similarly, in [10,54], both the Zr and steel coefficients were studied. In this work, it was decided to vary the molten zirconium and steel coefficients (HFRZZR and HFRZSS) only. The distributions for both parameters were selected to be truncated lognormal, as in the Gauntt study [35] for Zr. The median values for Zr and steel were equal to 7500 and 2500 W·m⁻²·K⁻¹, respectively, as both are default values in M2.2. The low and high values for Zr were based on Gauntt and equal to 2000 and 22,000 W·m⁻²·K⁻¹. In the case of steel, the low value was 1000 W·m⁻²·K⁻¹ and, as in Galushin and the old MELCOR default, the upper limit was 5000 W·m⁻²·K⁻¹, which was arbitrarily selected as a value higher than the median and was considered reasonable. For simplicity, the coefficients for the other materials were assumed to be equal to the best estimate SOARCA values, which are, in fact, the defaults for M2.2 [42,52]. The values were the following: 7500 W·m⁻²·K⁻¹ for the Zr, ZrO2, and UO2 components and 2500 W·m⁻²·K⁻¹ for the SS, SSOX, and CRP components. In the case of the FPT-1 model presented in this work, a candling HTC modification was introduced, as described in the MELCOR 2.1 assessment report for FPT-1 [55]. The HTCs for ZrO2 and UO2 were modified and set equal to 20,000 W·m⁻²·K⁻¹ and 30,000 W·m⁻²·K⁻¹, respectively. In our previous studies [28,29], we assumed a candling heat transfer coefficient of 20,000 W·m⁻²·K⁻¹ for metallic Zr (COR_CHT (1)), and the uncertain parameter was the same as this modified value. It was a mistake; the ZrO2 coefficient (COR_CHT (2)) should have been modified instead. The issue was corrected in the model studied in this work.
Debris Diameter-DHYPD and DHYPDLP
Two characteristic particulate debris lengths are present in the COR package (the COR_EDF card), and both affect the heat transfer, blockages and, in effect, the oxidation calculations. The first is the debris diameter in the core region (DHYPD), and the second is the debris diameter in the lower plenum (DHYPDLP).
The parameter range is not obvious because contradictory information is present in the literature. In the SNL reports [35,44], the core region debris diameter was in a range of 0.2-5 cm, with a median of ~1 cm and a log-normal distribution. The median size was justified as being close to a pellet size, and the limits were based on experimental observations. A smaller lower plenum debris size was log-normally distributed with limits of 1-6 cm and a median of 2.5 cm, which was justified as representing sintered fragments larger than fuel pellets. The same values were used in the VVER reactor study [6]. On the contrary, Galushin [10] studied BWR lower plenum debris diameters in a range of 0.2-0.5 cm, where other experimental data by Magallon and Kudinov justified the selection. In the best practices report [52], the recommended best-estimate diameter for the core region was 1 cm and for the lower plenum it was 0.2 cm, with the FARO experiments supporting the second value. Moreover, in [11], the considered debris median size for both the core and the lower plenum was 0.5 cm. For the core region, a truncated log-normal distribution was fitted to the Gauntt data [35,44] with a median of 1.2 cm, and the lower and upper limits were equal to 2 mm and 5 cm, respectively. The selection was not evident for the lower plenum region; a truncated log-normal distribution with a median of 0.5 cm and a range of 0.2-6 cm was selected, with the low value corresponding to the SOARCA recommendation and the high value being the maximum diameter provided by Gauntt [35]. The median value was a compromise between more recent studies based on experimental data and code defaults with low values, and the older Gauntt data, where the justification for larger particles was reasonable.
Debris Porosity-PORDP
Core debris porosity affects the heat transfer of the debris, coolability, blockage formation, and oxidation. The best estimate, code default, and SOARCA values were all equal to 0.4. In this study, a truncated normal distribution was fitted to the Gauntt data with a median of 0.38 and limits equal to 0.1-0.5 (the limits and the median value were provided explicitly in [35,44]). The distribution type was not provided; the upper limit represents an unstable bed, and the lower limit is not realizable for a packing of solid particles [35,44]. Gharari assumed a log-normal distribution with the same parameter range and the same median. Galushin and Kudinov applied a reduced range of 0.3-0.5 without information about the distribution [10].
Radiation Exchange Factors-FCELA and FCELR
As motivated by Gauntt, the radiation exchange factors for radial (FCELR) and axial (FCELA) exchange between COR package cells affect hydrogen production [35,44]. Gharari also applied these parameters, but they were not used by Galushin (that study was not focused on hydrogen). In this work, a truncated normal distribution was applied for the axial factor (FCELA), with a median of 0.1, a lower boundary of 0.02 (identical to the value used in the SNL study [35,44]), an upper boundary of 0.3 (as in Gharari), and a corresponding standard deviation of ~0.035. For the radial factor (FCELR), different distributions were applied for the PWR and FPT-1. The MELCOR assessment report justifies that this parameter has to be larger for the Phebus facility than for a typical NPP [55]. Consequently, for the PWR reactor, a normal-like distribution was applied with parameters the same as for FCELA.
On the contrary, for FPT-1, a normal-like distribution was used with a median equal to the recommended value of 0.75, arbitrarily selected lower and upper boundaries set to 0.5 and 1.0, respectively, and a standard deviation of 0.1. The current best estimate, code default, and SOARCA values are all equal to 0.1 for both parameters, and these were applied for the PWR best estimate model. In the best estimate FPT-1 model, FCELR = 0.75 and FCELA = 0.1 were applied. In older MELCOR versions, the default values were equal to 0.25.
In-Vessel Falling Debris HTC-HDBH2O
Debris fragments falling in the lower plenum transfer heat to the surrounding water; this is modelled by the in-vessel falling debris HTC parameter (the HDBH2O field in the COR_LP card). Eventual fragmentation and steam generation are factors that can enhance oxidation and influence hydrogen generation. It is only significant for the PWR reactor model. For this parameter, there is no consensus in the applied literature. The default value for M2.2 is currently 100 W·m⁻²·K⁻¹; earlier, in SOARCA [52], the recommendation was 2000 W·m⁻²·K⁻¹. In our modelling, we assumed the best estimate value to be 2000 W·m⁻²·K⁻¹. Gauntt reports [35,44] a range between 125 and 400 W·m⁻²·K⁻¹, which was applied without explicit information about the distribution; by fitting, we found that it was likely a triangular distribution with a mode equal to 150 W·m⁻²·K⁻¹. Itoh applied 100 W·m⁻²·K⁻¹ with a normal distribution and an SD equal to 10. Galushin, in [54], applied values in the range of 200-2000 W·m⁻²·K⁻¹. Gharari applied a normal-like distribution with a range of 100-400 W·m⁻²·K⁻¹, and the default value was 400 W·m⁻²·K⁻¹. In this work, we assumed a triangular distribution with a lower boundary of 100 W·m⁻²·K⁻¹, corresponding to the M2.2 default value, and an upper boundary of 2000 W·m⁻²·K⁻¹, which is equal to the SOARCA recommendation and the Galushin upper value [54]. The lower boundary from [54], equal to 200 W·m⁻²·K⁻¹, was selected as the mode (most probable value) of the triangular distribution.
Time-at-Temperature Model-IRODDAMAGE
The time-to-fuel-rod-collapse (time-at-temperature) model was applied because it is recommended [52]. It controls core degradation and can affect hydrogen generation. It is defined by the COR_ROD card and the IRODDAMAGE field, and it is dedicated to simulating fuel failure under prolonged high-temperature cladding conditions. Three alternative models were applied, taken from the Peach Bottom SOARCA UA [45,56], and are reproduced in Table 4. Model #0 is based on the SOARCA recommendation (best-estimate model) [52], and the two others have temperatures reduced or increased by 100 K. In this study, similarly to the reference report, a discrete distribution was applied with a probability of 0.8 for the basic model and a probability of 0.1 for each of the two other models.
Eutectic Temperature-TMLT
The interactive (INT) model was applied to mimic the eutectic temperature of the ZrO2-UO2 binary mixture. The temperature of the eutectic reaction affects the dynamics of blockage by candling materials. When this model is not used, the melting point of the material is provided by the enthalpy step change at the pure material's melting point, where, for ZrO2, the melting temperature is about 2990 K and, for UO2, it is 3113 K. For the INT model, two unique materials were introduced, UO2-INT and ZRO2-INT, with an eutectic temperature of 2500 K, as recommended by SNL [52]. This temperature was also applied as the best estimate value.
The interactive (INT) model was applied to mimic the eutectic temperature of the ZrO2-UO2 binary mixture. The temperature of the eutectic reaction affects the dynamics of blockage formation by candling materials. When this model is not used, the melting of the material is represented by an enthalpy step change at the pure material's melting point; for ZrO2 the melting temperature is about 2990 K, and for UO2 it is 3113 K. For the INT model, two dedicated materials were introduced, UO2-INT and ZRO2-INT, with a eutectic temperature of 2500 K, as recommended by SNL [52]. This temperature was also applied as the best estimate value. It is defined in the TMLT field of the MP_PRC card, which is used to define the new materials, and the model is assigned in the COR_MAT card. In previous research [29] for FPT-1, we applied different tables based on the MP_PRTF card; here, this was simplified with MP_PRC, as discussed in [57]. In this work, we assumed that the eutectic temperature (TMLT) has the same values as the fuel rod failure temperature (SC1132 (1)), with the same distribution and limits; hence, this parameter was not directly sampled. This assumption was introduced and motivated in the SOARCA uncertainty study for the Surry NPP [53]. The recent M2.2 contains a new, more mechanistic eutectic model (COR_EUT). It was tested in this work, but the presented PWR model had convergence problems, with a large portion of the runs failing during the calculations. In order to preserve accurate statistics, we decided that the interactive model was sufficient for the presented study, as it is still the current practice and a popular approach.

Maximum Melt Flow Rate after Breakthrough-SC1141 (2)

The core melt breakthrough candling parameter-the maximum melt flow rate per unit width after breakthrough-was studied by several researchers and indicated as influential. Itoh [11] showed it to be important for hydrogen in BWRs with M1.86; it was also examined in the SOARCA studies, and Kudinov and Galushin considered it in their studies of the lower head [10]. Because of the lack of valid data about the distribution, we selected a uniform distribution over the range suggested by Galushin, 0.1-2.0 kg·m^-1·s^-1. In the SOARCA recommendations, SC1141(2) was equal to 0.2 kg·m^-1·s^-1, but the new M2.2 default value is 1.0 kg·m^-1·s^-1, and this value was applied as the best estimate [52].

BE Results for FPT-1

The best estimate (BE) results for FPT-1 were compared with two experimental datasets ([18,29,30,58]) and selected best-estimate results found in the literature; see Figure 7 (Left). The final hydrogen mass provided in ISP-46 was equal to 96.7 g ± 10%, and the best estimate result was ~12% higher, equal to 108.0 g. Studying Figure 7 (Left), we can observe that the hydrogen production kinetics before the oxidation peak (~11,000 s) were simulated accurately. In the experiment, the oxidation peak corresponded to ~50 g of H2, after which the oxidation rate decreased. Similar behavior was present for the best estimate MELCOR model, but the oxidation peak ended at ~80 g (see Figure 7). The reduction in oxidation occurred because of blockages of the flow, as was also the case for the PWRs; see [14,59,60]. The best-estimate model provides reasonable results close to the upper experimental uncertainty limit, and the results are comparable to the literature results for M2.2 and M2.1 (see Figure 7). Moreover, we can see that the BE model used in this work provided almost the same results as the previous model calculated with MELCOR 2.2.11 [29], despite updates in heat-transfer parameters and deactivation of the RN package and the silver release model. It was observed that zirconium oxidation was responsible for 98% of the total hydrogen. Investigations of the FPT-1 model, including thermal-hydraulics, are available in previous works [28,29]. (Figure 7, right panel: PWR hydrogen generation for the two studied cases and Zr-only generation.)

BE Results for PWR

The BE results for the PWR reactor, for both the peaked and the FPT-like power profiles, are presented in Figure 7 (Right).
The difference between these two cases can be observed in the final H2 masses, equal to 286 kg and 232 kg for the peaked and the FPT-like profiles, respectively. Interestingly, the process was more rapid for the top-peaked case, and at the same time, more H2 mass was produced. The difference was due to the power profile and the location of the peak (see Figure 2). The peak defines the place where the first blockage occurs. When it is located at a lower elevation, less oxidation occurs because of reduced steam access to the unoxidized upper parts of the core. For the top-peaked case, the power over the lower part of the core is low, and fewer blockages are expected there, which allows more oxidation globally. The top-peaked model produced ~24% (~55 kg) more hydrogen. The generation kinetics were also slightly different in comparison to FPT-1: after the occurrence of the oxidation peak and blockage, only a small amount of hydrogen was generated (less than 10%), contrary to FPT-1, for which more than 30% of the H2 was produced after the peak (Figure 7). In general, zirconium was responsible for 98% and 97% of the hydrogen mass for the top-peaked and the FPT-like cases, respectively. For the Phebus model, the oxidation was much more efficient with respect to the mass of available Zr (see Table 1), reaching ~70%, whereas for the PWR it was only ~13% and ~16%. The core degradation in the PWR was very fast, and no forced flow was present, whereas the experimental flow was forced and the power and other conditions were controlled. The obtained mass of H2 may seem low compared to alternative MELCOR computations presented in the literature, and this is worth investigating. The literature was reviewed, and selected available results for severe accidents in PWR reactors are collected in Table 5. It is not easy to compare results for different reactors, studies, and code versions: limited data about the plant designs are available, user effects exist, and there are substantial differences between code releases, code nodalizations, and applied practices or default values. Despite that, a simple indicator was introduced-the ratio between the in-vessel hydrogen mass generation and the total core thermal power. Large cores are expected to be able to produce more hydrogen because of the larger mass of zirconium. Thermal power is directly related to the core size, is to some extent proportional to the mass of zirconium in the core, and can be considered characteristic of the core. The oxidation efficiency with respect to the Zr mass, or the maximum possible H2, would be a better indicator, but the required design details are usually unavailable.
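To make the indicator concrete, the short sketch below evaluates it for the two best-estimate results of this work. The core thermal power used here is a placeholder assumption for a generation-III PWR; the plant's power is not quoted in this excerpt.

```python
# Minimal sketch of the hydrogen-to-power indicator described above. The H2 masses are
# the best-estimate results of this work; the thermal power is an assumed placeholder
# for a generation-III PWR, not a value quoted in this excerpt.
H2_MASS_KG = {"top-peaked profile": 286.0, "FPT-like profile": 232.0}
CORE_THERMAL_POWER_MWTH = 4500.0  # assumption, for illustration only

for case, h2_kg in H2_MASS_KG.items():
    print(f"{case}: {h2_kg / CORE_THERMAL_POWER_MWTH:.3f} kg H2 / MWth")
# With this assumed power the ratios land near 0.06 and 0.05, i.e. within the
# 0.04-0.11 kg H2/MWth range reported for LB-LOCA in the reviewed literature.
```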
In Table 5, it can be observed that the SBO or SBLOCA sequences produce more hydrogen than LB-LOCA. This is to be expected because of the longer accident time, easier access to water/steam during the oxidation phase, possible sources of additional water, and, in general, a slower core degradation progression. The reviewed MELCOR results for LB-LOCA predicted between 0.04 and 0.11 kg H2/MWth, and our results were within this range. In principle, it is more difficult to assess the more recent MELCOR versions for LB-LOCA, as limited data are available. In the recent and extensive PWR reactor study [68], COR package nodalizations were studied for LB-LOCA in a three-loop Westinghouse-type plant, and similar values were predicted, in the range of 0.07-0.1 kg H2/MWth. With hydrogen-to-power ratios equal to 0.05 and 0.06, our BE results gave a low hydrogen mass, but one still comparable to other LB-LOCA simulations.

Uncertainty Analysis

Uncertainty analysis was performed for datasets containing all fully successful cases together with the partially successful cases screened for hydrogen mass (FoM) convergence; see Figures 8 and 9. The screening limit was 2500 s for the PWR and 16,500 s for FPT-1; the limits were selected as the times by which most of the oxidation was completed, while also considering code failures. In these figures, the best estimate case is shown as a black line; the mean value, based on interpolation and extrapolation over both the fully and the partially successful cases, as a blue line; the fully successful cases as grey lines; and the partially successful cases as red lines. Partially successful cases are those that failed only after the screening time limit (2500 s for the PWR), giving 394 usable cases for the top-peaked model (left) and 398 for the FPT-like model (right). Box charts are presented with the central bar being the median value of the FoM, the lower and upper quartiles (the central 50% of the cases) presented as a box, the minimum and maximum values that are not outliers presented as black whiskers, and outliers as blue circles.
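The box-chart quantities just described can be reproduced with a few lines of standard statistics. The sketch below is a minimal illustration (NumPy, synthetic stand-in data); the 1.5×IQR outlier fences are an assumption, since the outlier criterion is not stated explicitly in this excerpt.

```python
# Minimal sketch of the box-chart statistics used for the final FoM (hydrogen mass).
# Synthetic stand-in data; the 1.5*IQR outlier fences are an assumed convention.
import numpy as np

def box_stats(fom_kg):
    q1, med, q3 = np.percentile(fom_kg, [25, 50, 75])
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inside = fom_kg[(fom_kg >= lo_fence) & (fom_kg <= hi_fence)]
    return {"median": med, "quartile_box": (q1, q3),
            "whiskers": (inside.min(), inside.max()),
            "outliers": fom_kg[(fom_kg < lo_fence) | (fom_kg > hi_fence)]}

fom_kg = np.random.default_rng(0).normal(277.0, 32.0, size=398)  # stand-in for one PWR case
print(box_stats(fom_kg))
```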
For the PWR cases (Figure 8), the first thing that can be observed is the significant impact of the power profile: the relative difference between the medians of the two models was ~25% (~69 kg), which is similar to the difference observed for the BE cases (Figure 7). For the PWR with the FPT-like profile, the final mean was 279 kg, and the median was 277 kg, with the lower and upper quartiles in the range of 257-299 kg and the whiskers in the range of 225-360 kg. The best estimate result, equal to 232 kg, was below the quartile box and substantially below the median. Outliers were observed only for the upper values, with the highest value of 392 kg, while the lowest observed value was 225 kg. In the case with the top-peaked power profile, the final mean and median were 348 kg and 346 kg, respectively. The quartile box (upper and lower quartiles around the median) was in the range of 319-374 kg, and the whiskers were in the range of 237-457 kg. The best estimate result, 286 kg, was in the lower-value region and outside the quartile box, significantly below the median, similarly to the previous case. A few outliers above and one below were observed; the maximum value predicted was 496 kg, and the minimum was equal to 220 kg. Interestingly, the FoM mean value was very close to the median value for all cases. For the FPT-1 model, the final median FoM was 95.9 g, very close to the experimental value (96.7 g, ~0.8% difference). The mean value was 92.7 g, the minimum and maximum values were equal to 56.9 g and 112.1 g, respectively, and the 50% percentile box was in the range of 82.1-103.7 g. It is worth observing that the BE result was outside the quartile box and even close to the maximum value obtained in the analysis. In order to compare the cases, the FoM was normalized by the final mean value, and time was normalized by the end-of-oxidation time; the outcomes are presented in Figure 10. For the PWR reactor, it can be observed that the shape is practically the same for both models, and the observed uncertainties had the same magnitude, close to ~20-25% of the median value for the 95% percentile band, while the 50% percentile band was within less than a 10% band around the median. In the case of FPT-1, we observed a similar outcome: the 95% percentile was within 20-25% of the mean value, and the 50% percentile was within a ~10% band around the median value. The properties of the distributions were confirmed by the normalized empirical cumulative distribution functions (ECDF) in Figure 11. We observed normal-like distributions for both PWR reactor results, with the mean equal to the median, which is a characteristic of a normal distribution. Nevertheless, for FPT-1, we observed different, non-normal behavior. At first sight, Figure 11 suggests that the FoM deviation from a normal distribution is caused by the screening limit being too artificial and by the larger portion of failed cases, which would make the statistics deviate from the normal distribution. However, studying earlier points in time, when practically all cases were still successful, we also observed similar deviations from a normal distribution (which can also be seen in Figure 10 (Right)). The source of this discrepancy was not identified, but we should remember that the PWR scenario can be considered simpler than the FPT-1 experiment sequence. The issue of failed cases is elaborated below, as it is an important practical problem for severe accident U&SA.
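The normalisation and empirical CDF used for Figures 10 and 11 are straightforward to express; the sketch below (NumPy, synthetic stand-in data) shows the two operations and the mean-versus-median check mentioned above.

```python
# Minimal sketch of the FoM normalisation and the empirical CDF (Figures 10 and 11).
# Synthetic stand-in data; real inputs would be the ~400 end-state hydrogen masses.
import numpy as np

fom = np.random.default_rng(1).normal(277.0, 32.0, size=398)   # final FoM per run [kg]
fom_norm = fom / fom.mean()                                    # normalised by the final mean

def ecdf(values):
    """Sorted values and their empirical cumulative probabilities."""
    x = np.sort(values)
    return x, np.arange(1, x.size + 1) / x.size

x, p = ecdf(fom_norm)
median_norm = np.interp(0.5, p, x)    # the p = 0.5 crossing of the ECDF
# For a normal-like result the normalised median sits near 1.0 (the normalised mean),
# as observed for the PWR cases; a visible offset signals non-normal behaviour (FPT-1).
print(median_norm, fom_norm.mean())
```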
For the PWR with the top-peaked profile, 340 cases were calculated fully successfully (85%); 54 were partially successful, so, considering the FoM limit (2500 s), the success ratio was 98.5% (394 cases); only 6 cases failed. For the case with the FPT-like profile, 333 cases were fully successful (83.25%) and 65 cases succeeded partially; 99.5% of the cases reached the assumed oxidation time limit. Figure 8 indicates that most of the partially successful cases failed much later than the end of oxidation. The situation was worse in the case of FPT-1: only 212 cases (53%) were fully successful, 128 (32%) were partially successful (85% in total), and 60 cases failed. Because many failed cases were near the limit (see Figure 9), we can conclude that the final results are slightly underestimated and that the statistics deviate. Most of the failures occurred at the moment when the bundle power was reduced (effectively) to zero, when no more hydrogen was being produced; however, we failed to find the reason for this effect, and typical procedures such as reducing the maximum and minimum time steps were applied without success. Despite that, the FoM was expected to be close to the converged state, although it was slightly underestimated in at least part of the failed cases, as can be deduced by studying Figure 9. The reader should be aware of this fact and keep a proper margin of confidence. It was assessed that the introduced error was less than 5 g (~5%) of the FoM for the partially successful cases. Moreover, when the simulations were repeated with older code versions (not reported here), different outcomes were observed: for the PWR model, M2.2.15 failed in substantially more cases than the recent revision M2.2.18, whereas the FPT-1 model calculated with, e.g., M2.2.11 was more efficient in terms of successful cases. Despite that, in this work, we decided to use only the most recent M2.2.18 to maintain consistency between the PWR and FPT-1 results.
MELCOR users should be aware that different code versions can provide different results.

Sensitivity Analysis

The interpretation of correlation coefficients is not obvious, and different practices are present in publications and in various fields. In the literature, it is possible to find various limits for the levels of correlation (e.g., [69][70][71][72][73][74]), and, in principle, each coefficient can have different limits. For simplicity and clarity of the analysis, we assumed the same correlation levels for all considered coefficients; a similar approach was used in [69,72,75]. We assumed that |ρ| < 0.2 indicates no reasonable correlation (or a very weak correlation); this limit was also applied by Itoh and Zheng [11,75] for various coefficients, by Freixa et al. [74] for Pearson, and by Joo et al. for Spearman [71]. The assumed limits for weak and moderate correlations were |ρ| < 0.4 and |ρ| < 0.7, respectively, based on Joo [71] and Schober [72]. In some special circumstances, when the p-value was very low or similar correlation patterns were observed for both the PWR and FPT-1, we considered a very weak correlation for 0.1 < |ρ| < 0.2; however, these cases should be treated with a large margin of confidence, as they are rather questionable correlations. In some references, researchers consider a weak correlation already for |ρ| > 0.1 [72]. Figures 12 and 13 present three different correlation coefficients, the linear Pearson, the non-linear Spearman rank, and the Kendall coefficient, for all studied uncertainty parameters. Figures 14 and 15 present the normalized FoM as a function of the uncertainty parameters, with linear-regression-generated trendlines; thanks to the normalization by the final FoM mean, the comparison is more comprehensible. It should be highlighted that for the discrete parameters SC1001 and IRODDAMAGE the correlation coefficients have no meaning. In this work, a correlation was considered statistically significant when p < 0.05; however, the reader should be aware that different practices can be found in the literature. In principle, a high ρ with a low p indicates a correlation.
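As a concrete illustration of this screening, the sketch below computes the three coefficients and their p-values for one sampled parameter against the FoM and classifies the result against the |ρ| thresholds adopted above. It is a minimal example assuming SciPy; the input arrays are synthetic stand-ins, not the actual results.

```python
# Minimal sketch of the correlation screening: Pearson, Spearman and Kendall coefficients
# with p-values, classified against the |rho| thresholds used in this work (0.2/0.4/0.7)
# and the p < 0.05 significance criterion. Synthetic stand-in data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pordp = rng.uniform(0.1, 0.5, size=340)                     # sampled debris porosity
fom = 350.0 - 120.0 * pordp + rng.normal(0.0, 25.0, 340)    # FoM with a weak negative trend

def level(rho):
    a = abs(rho)
    return ("no / very weak" if a < 0.2 else
            "weak" if a < 0.4 else
            "moderate" if a < 0.7 else "strong")

for name, result in [("Pearson", stats.pearsonr(pordp, fom)),
                     ("Spearman", stats.spearmanr(pordp, fom)),
                     ("Kendall", stats.kendalltau(pordp, fom))]:
    rho, p = result                                          # all three unpack as (rho, p)
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{name:8s} rho = {rho:+.2f}, p = {p:.2e} -> {level(rho)}, {flag}")
```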
In the case of the peaked PWR, we can observe (Figure 12 (left)) that a weak negative correlation exists for the porosity (PORDP) and the core region hydraulic diameter of the debris (DHYPD); this is also supported by Figure 14 (blue curves, #5 and #7). We can attempt to explain it by the fact that smaller debris particles have a larger contact area with their surroundings, making oxidation more likely and leading to larger hydrogen generation. The decrease in porosity may, counterintuitively, lead to a higher hydrogen mass: on the one hand, a larger porosity should allow steam to access and oxidize the debris more easily, but also to cool it more effectively; on the other hand, a larger porosity allows steam to leave the debris surroundings rapidly, decreasing the chance of contact. What is also important is that a lower porosity should lead to higher temperatures, and because oxidation is driven by temperature-dependent parabolic kinetics, more violent reactions should occur. Unfortunately, this type of explanation fails when compared with the PWR case with the FPT-like profile, which shows no such correlations (Figure 12 (right) and Figure 14), and, in general, no negative-type correlation was observed there. The power profile, shifted towards the lower portions of the core, simply does not allow these effects to emerge at a proper scale. In Figure 12 (left), for the peaked profile case, we can also observe a moderate positive correlation for SC1131 (2) (the Zircaloy melt breakout temperature). It shows that the delay of candling affects hydrogen production because of the physical separation of steam and Zr. In the case of the FPT-like profile (Figure 12 (right)), the correlation for SC1131 (2) is much less significant and was below the weak correlation limit, but its p-value was almost zero, and it can be considered a very weak correlation. We can observe that the correlations were stronger for the peaked profile (see #1 in Figure 14). Similarly, for the fully correlated variables SC1132 (1) (rod collapse temperature) and TMLT (eutectic temperature), a correlation with the FoM is visible (#2 and #13 in Figure 14); it is much stronger and above the weak correlation limit for the FPT-like profile, but slightly below the limit (very weak) for the peaked profile (Figure 12). The impact was expected, as this parameter affects candling and blockage occurrence.
Considering the discrete parameters, the oxidation correlations (SC1001) have a similar spread of results (see Figure 14, #11), with Baker-Just (#2) being an exception, with apparently higher hydrogen production for both PWR models. Parameter #12 is inconclusive; the basic time-at-temperature model was much more probable than the others. In the case of the FPT-1 model (Figure 13), a very weak (0.1 < |ρ| < 0.2) negative correlation with p < 0.05 can be observed for SC1141 (2); it is also present for the PWR with the FPT-like profile but did not appear for the peaked PWR case. Positive very weak correlations with p < 0.05 existed for PORDP and DHYPD, which, on the contrary, were not significant for the PWR with the FPT-like profile (but existed for the peaked profile). For almost all parameters for FPT-1, the rho-value was lower than 0.2, and only the Spearman coefficient for DHYPD was larger than 0.2; so, it is the only reasonable correlation for this model. We can observe the effects for all parameters by studying the linear regression results in Figure 15. Considering the discrete variables (#11 and #12), we can observe no evident effect of the time-at-temperature model, but there was an impact of the oxidation correlation (SC1001): models 3 and 4, both using Prater-Courtright, were responsible for less hydrogen than the other cases. The main observation of the sensitivity analysis was the dependence of the results on the power profile, the boundary condition for the PWR model. It is an important lesson showing that the typical approach to sensitivity analysis should be used very carefully when someone attempts to draw any general conclusions about the sensitivity of modelling parameters. The same model can provide different results when different BCs are used and can eventually lead to opposite observations. We can postulate that researchers should not draw too general conclusions based on this type of analysis. Moreover, it was observed that the existence of a correlation for the experiment does not guarantee that the same correlation will be observed for plant-scale simulations. In principle, the obtained correlations are partially in agreement with the H2-focused studies by Gharari and Gauntt. In Gharari [6] (Figure 11), for the in-vessel (VVER) hydrogen production, they observed slight positive correlations for the Zr melt release SC1131 (2) and the debris particle size in the lower plenum (PORDPLP), and slight negative correlations for the axial radiation exchange and the falling debris HTC. In the Gauntt report [35,44], only moderate or weak correlations were observed for the in-vessel hydrogen generation, with positive correlations for the Zr melt release and the candling HTC, and a negative dependence on the falling debris HTC and the lower plenum debris size. In this work, no significant correlations were observed for the falling debris and the lower plenum debris size.

Conclusions

In this work, simulations focused on hydrogen generation during a low-pressure LB-LOCA sequence were studied for both a large generation-III PWR reactor and the integral experiment Phebus FPT-1. The FPT-1 experiment was analyzed (see also [28,29]), and subsequently, its outcomes were applied to the analysis of the PWR reactor. The scope was limited to a severe accident during the in-vessel phase, before the RPV failure. The Phebus FPT-1 experiment model was updated with MELCOR 2.2.18. The best estimate simulations were compared with alternative solutions, and reasonable agreement was observed.
Similarly, PWR reactor simulations were performed, and the obtained hydrogen masses were considered low but comparable with the literature data and assessed as reasonable. A dedicated sensitivity and uncertainty analysis methodology and MATLAB-based open software were developed and tested in this work. The uncertainty analysis was performed for sixteen different parameters selected after the literature review. Overall, more than 1200 MELCOR runs were executed, 400 per studied case. The uncertainty analysis showed that, despite the differences in kinetics and scale, the final variation of the FoM (hydrogen mass) was similar for both FPT-1 and the PWR. The 95% percentile band had boundaries of approximately ±25% of the mean value around the median, and the 50% percentile band was within a 10% band around the median (see Figure 10); this was also indicated by the empirical CDFs, which had very similar shapes for all cases (see Figure 11). It showed that the variation predicted for the experiment had a similar magnitude to the variation predicted at the plant scale. Moreover, it was observed that the median values were close to the mean, and both were substantially different from the best estimate result obtained with the recent MELCOR best practices. The analysis of the best-estimate parameters clearly showed that defaults and best practices changed substantially over the years, which introduced variation into the results obtained with different code versions and different practices. This work observed that the best estimate FPT-1 models predicted a FoM larger than the experiment and substantially larger than the estimated median. The opposite was true for the PWR plant scale, where the best estimate results were substantially lower than the median. For the PWR reactor, moderate or weak correlations were observed for the debris porosity and the debris hydraulic diameter in the core (both only for the peaked profile), the rod collapse/melting temperature (only for the FPT-like profile), and the Zr melt breakout temperature (only for the peaked profile). Some dependence was observed for the oxidation correlation selection. Very weak correlations can be considered for the melt breakout temperature (FPT-like profile) and for some other parameters, but they should be treated with a large margin of confidence. For the PWR models, moderate or weak correlations were present only for a few parameters. For FPT-1, all coefficients were below the weak correlation limit; an exception was the debris diameter, with a weak correlation but only for the Spearman coefficient. Eventually, a very weak correlation, below the 0.2 limit, could be considered for the debris porosity and the melt flow rate after breakthrough. Similar uncertainty parameters were indicated by the Gharari and Gauntt studies for other reactors, but typically with different correlation levels. In the literature, a large variation of hydrogen production can be found among different computer codes (like MELCOR vs. MAAP, e.g., see [16]) but also among results of the same code (see Table 5). Different results can be obtained with different models, code versions, or accident scenarios, which may sometimes confuse the analyst. This work shows that even for the same model and code version, large variations can be observed when modelling parameters are changed and when significant boundary conditions are varied (e.g., the power profile). Different uncertainty parameters can become more significant, and some can lose significance.
In this research, an example of this was the role of the power profile change, where the observed difference in the mean FoM between the two power profiles was ~25%. This introduces an additional level of difficulty in studying S&UA for different reactors and demands further research. Moreover, this study was focused only on modelling parameters, but for a full-scale uncertainty analysis the initial and boundary conditions should also be investigated, which should likewise be a matter of future research. Regarding the selection of uncertainty parameters, we applied the outcomes of other studies, but we believe that a more systematic approach should be followed in the future. Proper guidelines for each MELCOR modelling parameter could be developed, but this demands a larger effort, likely starting from a large group of parameters and then systematically selecting the important ones on the basis of analytical studies rather than on the basis of other research; a similar approach was proposed by Itoh [11]. Moreover, a sample of 400 input decks is large, but we believe that even larger samples, above 1000 and similar to SOARCA [76], could be used in future, more detailed studies. As indicated at the beginning of the section, the uncertainty analysis approach worked appropriately. The approaches based on Monte Carlo (or eventually Wilks) allowed us to obtain confidence intervals, which is their significant advantage. However, in this study, the typical approach to sensitivity analysis proved to be less effective. We observed that using relatively simple and standard statistical tools for correlations (linear regression with Spearman or Pearson) does not provide conclusive answers about the sources of uncertainty, even though these methods are very popular and were used by several researchers (e.g., [6,9]). It is difficult to indicate the main source of the observed variation in the final FoM: even though the same shapes of hydrogen production were predicted for the PWR, different parameters were indicated as important, sometimes with opposite correlations. In general, we can see that there is no single parameter responsible for the observed differences, and we can speculate that the observed large variation in hydrogen generation is a cumulative effect of all the varied parameters, as no single strong effect was observed. Additionally, in this study more than 300 runs were used for the sensitivity study, whereas in the typical approach researchers use 59 or 93 runs, which will likely magnify the observed issue. The removal of failed cases is a problem for LHS sampling: the LHS method demands knowledge of the number of samples before the actual sampling procedure, so fully failed cases, which are inevitable with MELCOR, pose a problem for the input space stratification. This topic is discussed in more detail in [45,53]. Fortunately, in this work, fully failed cases were a minor percentage of all results, and the bias was expected to be low. Nevertheless, considering this problem for future research, we recommend using SRS for studies with a number of input decks as large as in this work.
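The stratification issue described above is easy to visualise: with Latin hypercube sampling every run occupies its own stratum, so removing failed runs leaves empty strata, whereas simple random sampling has no such structure to break. The sketch below (SciPy's qmc module, illustrative values only) contrasts the two approaches.

```python
# Minimal sketch contrasting Latin hypercube sampling (LHS) with simple random sampling
# (SRS) for the kind of parameter study described above. Assumes SciPy >= 1.7.
import numpy as np
from scipy.stats import qmc

n_runs, n_params = 400, 2

lhs_unit = qmc.LatinHypercube(d=n_params, seed=3).random(n_runs)  # stratified unit-cube sample
srs_unit = np.random.default_rng(3).random((n_runs, n_params))    # plain Monte Carlo sample

# Example mapping of the first column to SC1141(2), uniform over 0.1-2.0 kg m^-1 s^-1
sc1141 = qmc.scale(lhs_unit[:, [0]], l_bounds=[0.1], u_bounds=[2.0])

# Mimic MELCOR failures by discarding a few percent of the runs: the surviving LHS points
# no longer cover every stratum, which is the bias concern raised in the text; SRS points
# remain an unbiased (if smaller) random sample.
survived = np.random.default_rng(4).random(n_runs) > 0.03
print(f"kept {survived.sum()} of {n_runs} runs", sc1141[survived][:3].ravel().round(2))
```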
The discussion of the sensitivity results indicates that more sophisticated statistical tools and methods should be considered. In order to identify the sources of uncertainty, global uncertainty and sensitivity methods can be used, and they will be applied in future research [10,23,24,26]. It can be expected that the effects are of higher order and that the simpler first-order analysis applied here, and common among other researchers, is not enough. In the future, more advanced methods are necessary to indicate the source of the observed uncertainties. We would like to thank the anonymous reviewers for their valuable comments and suggestions.
Reduced expression of the glucocorticoid receptor in the hippocampus of patients with drug‐resistant temporal lobe epilepsy and comorbid depression Abstract Objective Depressive disorders are common among about 50% of the patients with drug‐resistant temporal lobe epilepsy (TLE). The underlying etiology remains elusive, but hypothalamus‐pituitary‐adrenal (HPA) axis activation due to changes in glucocorticoid receptor (GR) protein expression could play an important role. Therefore, we set out to investigate expression of the GR in the hippocampus, an important brain region for HPA axis feedback, of patients with drug‐resistant TLE, with and without comorbid depression. Methods GR expression was studied using immunohistochemistry on hippocampal sections from well‐characterized TLE patients with depression (TLE + D, n = 14) and without depression (TLE − D, n = 12) who underwent surgery for drug‐resistant epilepsy, as well as on hippocampal sections from autopsy control cases (n = 9). Video–electroencephalography (EEG), magnetic resonance imaging (MRI), and psychiatric and memory assessments were performed prior to surgery. Results Abundant GR immunoreactivity was present in dentate gyrus granule cells and CA1 pyramidal cells of controls. In contrast, neuronal GR expression was lower in patients with TLE, particularly in the TLE + D group. Quantitative analysis showed a smaller GR+ area in TLE + D as compared to TLE − D patients and controls. Furthermore, the ratio between the number of GR+/NeuN+ cells was lower in patients with TLE + D as compared to TLE − D and correlated negatively with the depression severity based on psychiatric history. The expression of the GR was also lower in glial cells of TLE + D compared to TLE − D patients and correlated negatively to the severity of depression. Significance Reduced hippocampal GR expression may be involved in the etiology of depression in patients with TLE and could constitute a biological marker of depression in these patients. | INTRODUCTION Depression disorders are among the most common psychiatric comorbid conditions in patients with drug-resistant temporal lobe epilepsy (TLE). Its prevalence ranges between 30% and 35%, reaching the highest prevalence (50%) at specialized epilepsy centers. 1,2,3 Furthermore, a history of depression is frequently found in patients with epilepsy, 4 and a positive correlation between the development of seizures and depressive-like symptoms has been demonstrated in various animal models of epilepsy, suggesting a bidirectional relationship between depression and epilepsy. 3,5,6 Comorbid depression is further associated with a poor quality of life, increased suicidal risk, higher medical costs, and an increased risk of developing drug-resistant epilepsy. 3,7 The underlying pathogenic mechanisms remains unknown, but alterations in hippocampal neuroplasticity due to an increased activity of the hypothalamus-pituitary-adrenal (HPA) axis is frequently observed in major depression and likely constitutes a common pathway that may also be disturbed in the combination of TLE and depression. Hence, disturbances in HPA axis activity have been implicated as a possible pathogenic mechanism underlying the association between both pathologies. 5,8,9 In clinical practice, epilepsy surgery is performed on patients with severe, drug-resistant TLE. The most common histopathological alteration found in these patients is hippocampal sclerosis. 
10,11 The hippocampus is furthermore particularly sensitive to glucocorticoids (GCs), important steroid hormones released from the adrenal gland after stress. Both mineralocorticoid receptors (MRs) and glucocorticoid receptors (GRs) are highly expressed in various subregions of the hippocampus, 12,13 and their activation following GC binding exerts negative feedback inhibition of HPA axis activity. 14,15 Hippocampal GRs further regulate neuronal excitability, [16][17][18] and particularly in the dentate gyrus, they are involved in neuroplasticity and neurogenesis. 14 It has been reported that high glucocorticoid levels induced by stress can affect epilepsy and may increase seizures. 15,16 For example, corticosterone hypersecretion has been found after status epilepticus in rodents. 17 In addition, corticosterone administration and experimental stressors enhance neuronal excitability in the hippocampus, 16,18 changes that may be reversed with GR and MR antagonists. 19 Furthermore, patients with TLE who are exposed to a psychosocial stress challenge have higher levels of cortisol, 20 whereas stressful events, and particularly early life stressors, increase the risk for seizures. 19 Furthermore, persistent HPA axis hyperactivity has been observed after seizures in patients with epilepsy, suggesting an impairment of the inhibitory control of the HPA system. 9 Several groups have investigated the expression of GRs in animal models of depression and/or chronic stress, 12,21,[22][23][24][25] and a few studies have been done on GRs in postmortem hippocampal tissue from patients with depression. 13,[25][26][27][28][29] So far, only a few studies have addressed hippocampal GR expression in experimental models of epilepsy 29 and/or (resected) brain tissue from patients with epilepsy. 31,32 To the best of our knowledge, the GR has not been studied in the hippocampus of patients with epilepsy and depression; we therefore set out to study for the first time the hippocampal expression of GRs in a well-characterized cohort of patients with drug-resistant TLE, with and without comorbid depression, as well as in control subjects. KEYWORDS: chronic stress, dysthymia, hypothalamus-pituitary-adrenal axis, major depression. Key Points: • Depression is the most common psychiatric comorbidity in patients with drug-resistant temporal lobe epilepsy (TLE). • Lower expression of the glucocorticoid receptor (GR) was found in neurons and glia within the hippocampus of patients with TLE, particularly with comorbid depression. • GR expression correlated negatively with the depression severity score, which was based on psychiatric history. • Lower GR expression in depressed patients with epilepsy is consistent with possible alterations in hypothalamus-pituitary-adrenal (HPA) activity in these patients. • GR expression may be involved in the pathogenesis of comorbid depression in TLE and could constitute a potential biological marker of depression in patients with TLE. | Study design and patient selection Hippocampal samples obtained from patients who underwent surgery for drug-resistant TLE according to the criteria of Kwan et al (2010) 33 were selected within the period from 2006 to 2016. All patients underwent surgery at the epilepsy center of the Ramos Mejía Hospital and/or El Cruce Hospital, Buenos Aires, Argentina.
The immunohistochemical procedures were performed at the department of neuropathology of the Amsterdam UMC, The Netherlands. We included samples from patients who had completed the routine psychiatric assessment protocol before surgery and who had signed the approved informed consent for participation. The psychiatric assessment protocol started at the epilepsy center in 2001 as part of a clinical research project and is now considered a routine measure in all patients before epilepsy surgery. 34 All patients were receiving their habitual medication at the moment of surgery and no one had received glucocorticoids anywhere before surgery. Samples included in this study were grouped according to the following criteria for depression. Depression was considered positive when patients had experienced at least one current or past interictal episode of major depression and/or other depressive disorder, according to the Axis I of the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM) IV classification using Structured Clinical Interview for DSM Disorders (SCID) I (dysthymia, major depression with or without psychotic symptoms and/or recurrent depression, bipolar disorder). 35,36 Patients with primary chronic interictal psychiatric disorders according to other sections in Axis I of DSM IV (ie, chronic psychosis, current posttraumatic stress disorder, and severe anxiety disorder) were excluded, as were patients with mental retardation (IQ < 70 and/ or attendance at a special school). The study was approved by the ethics committee of the Ramos Mejía and El Cruce Hospitals, in accordance with the Ethical Standards laid down in the 1964 Declaration of Helsinki, and full informed consent procedures for participation. | Diagnosis of drug-resistant TLE, video- EEG evaluation, and magnetic resonance imaging (See Appendix S1.) | Neuropsychological assessment All patients underwent a neuropsychological assessment before surgery, which was undertaken by trained specialists. Verbal memory was determined using the Rey Auditory Verbal Learning Test (RAVLT), Spanish Version, 37 consisting of reading a list of words in five different trials and then recovering the immediate memory, differed memory, and recognition in each trial. For visual memory the Rey-Osterrieth Complex Figure Test (RCFT) was used. This nonverbal test consists of a visual design that is presented to patients who have to copy and then reproduce immediately after the visual presentation (immediate recall) and after 30 minutes (delayed recall). Because there are no regional normative data for these tests for the Argentinian population, international data were used to compare our results. 38 To measure the cognitive status, z-scores were obtained by comparing each individual result of each test with the normal data corrected by age and sex. 38 According to the high correlation observed between the type of memory and the hippocampal sclerosis laterality, 39 we considered the z-score for visuospatial memory in patients with a right focus, and the z-score for verbal memory in patients with a left focus. | Psychiatric assessment All patients included in this study also underwent a complete psychiatric assessment prior to surgery. Psychiatric assessment was performed by trained psychiatrists according to a standardized protocol especially designed for patients with drug-resistant epilepsy. 6,34 Psychiatric history was obtained from each patient and relatives, complemented by information from families. 
The psychiatric semiology of the witnessed examination was supplemented with the Structured Clinical Interview (SCID) Spanish version for DSM IV Axis I diagnoses, and with SCID I and SCID II for personality disorders. 35 Diagnosis of depression was based on DSM IV classification and SCID results. In addition, all patients were assessed according to the Global Assessment of Functioning (GAF) of the DSM IV and to the Beck Depression scale. The GAF is a 100-point tool that rates overall psychological, social, and occupational functioning in relation to psychiatric symptoms and is included in the DSM IV in the section on multiaxial assessments (Axis V of DSM IV). 36 The interviews were carried out in approximately 2 to 3 hours. To determine depression severity, the Beck Depression Inventory II (BDI II), Spanish version, was also administered to quantify depression symptoms at the moment of psychiatric assessment. The BDI II was added to the protocol in 2010. 40 Depression severity was also determined using an ad hoc composite score based on psychiatric history, using factors 1-8 for the diagnosis of depression according to the SCID I criteria of DSM-IV. One point was added for each positive factor: 1, the presence of one episode of an affective disorder codified in Axis I of DSM-IV; 2, comorbid psychiatric disorders in Axis I or II, present or past (one point for each comorbid disorder); 3, suicide attempts; 4, psychiatric hospitalization; 5, antidepressant treatment (one point was given for patients who had received antidepressants); 6, GAF ≤60; 7, psychotic symptoms associated to depression; and 8, experienced more than one episode of an affective disorder (ie, major depression and dysthymia or recurrent major depression). | Neuropathological diagnosis and immunohistochemistry Resected hippocampi were fixed in 10% buffered formalin for >1 week and embedded in paraffin. Coronal hippocampal sections at the anterior-medial region of hippocampal body were sectioned at 5 µm, mounted on pre-coated glass slides (Star Frost, Waldemar Knittel, Braunschweig, Germany) and processed. Trained neuropathologists made the neuropathological diagnosis. Archival material of postmortem control hippocampus (postmortem delay was maximum 8 hours) was simultaneously processed. Samples were selected matched by gender and were otherwise free from known neurological injury, drug, and/or alcohol abuse and suicidal events. Immunohistochemistry was performed to study GR protein expression to protocols described before. 13,28 Furthermore, to assess cell type-specific effects, double-labeling was performed using markers for astrocytes as well | Statistical analysis Descriptive statistics was performed, and the chi-square test was used to analyze qualitative variables. The normal distribution of data was determined using the Shapiro-Wilk test. The Student's t test, the one-way analysis of variance (ANOVA), and Pearson correlations were applied when a normal distribution was found and non-parametric tests (Mann Whitney) and Spearman correlations were applied when the data were not normally distributed (Shapiro-Wilk test < 0.05). IBM SPSS Statistics 22 was used to perform statistical analysis. A P-value < 0.05 was assumed to indicate a significant difference. patients with TLE without depression (TLE − D; age = 34± 8 years; 8 men and 6 women), and 9 postmortem controls (age = 52.9 ± 20.6; 6 men and 3 women, cause of death: heart failure = 5, aortic dissection n = 2, pulmonary embolism n = 1, pneumonia n = 1), were included. 
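As a brief computational aside on the methods described above, the sketch below illustrates the ad hoc composite depression-severity score: one point per positive factor (1-8) from the psychiatric history, with comorbid disorders contributing one point each. The field names and data structure are illustrative assumptions only; the paper defines the factors, not a data format.

```python
# Minimal sketch (Python) of the ad hoc composite depression-severity score: one point per
# positive factor 1-8 from the psychiatric history; factor 2 adds one point per comorbid
# disorder. Field names are illustrative assumptions, not taken from the paper.
BINARY_FACTORS = [
    "affective_episode_axis_I",            # 1. at least one affective-disorder episode (Axis I)
    "suicide_attempt",                     # 3
    "psychiatric_hospitalization",         # 4
    "antidepressant_treatment",            # 5
    "gaf_60_or_less",                      # 6
    "psychotic_symptoms_with_depression",  # 7
    "more_than_one_affective_episode",     # 8
]

def composite_score(history: dict) -> int:
    score = sum(1 for f in BINARY_FACTORS if history.get(f, False))
    score += int(history.get("n_comorbid_axis_I_or_II_disorders", 0))  # 2. one point each
    return score

example = {"affective_episode_axis_I": True, "antidepressant_treatment": True,
           "gaf_60_or_less": True, "n_comorbid_axis_I_or_II_disorders": 1}
print(composite_score(example))  # -> 4
```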
Postmortem samples were selected matched by gender and free from neurological injury, drug and/or alcohol dependency, and suicidal evidences. The epilepsy duration, the age at epilepsy onset (both P > .05, Student's t test), and seizure frequency (P > .05, Mann-Whitney) did not differ between TLE − D and TLE + D. Clinical and neuropathological data of TLE − D and TLE + D cases are summarized in Table 1. | GR expression in neurons Abundant GR immunoreactivity was present in nuclei of granule cells within the dentate gyrus of controls ( Figures 1A and 2A) and pyramidal cells of CA1 ( Figures 1A and 3A). A predominant nuclear staining was also found within the granule cell layer of TLE − D (Figures 1B and 2B) and TLE + D samples ( Figures 1C and 2C). However, GR immunoreactivity was lower in granule cells in TLE − D as compared to controls, and the lowest expression was found in TLE + D (Figures 1C and 2C). GR immunoreactivity was also lower in CA1 pyramidal cells in TLE − D (Figures 1B and 3B) as compared to controls ( Figures 1A and 3A), and the lowest expression was again found in TLE + D samples (Figures 1C and 3C). | GR expression in glial cells Consistent with earlier reports, 13 GR immunoreactivity was also found in the nuclei of glia cells (white arrows in Figures 2 and 3). Double labeling confirmed that the GR was expressed predominantly in the nucleus and co-localized with the microglia marker CR3/43 and the astrocyte marker GFAP ( Figure 5 | Glucocorticoid receptor expression in relation to clinical parameters Regarding the psychiatric parameters related to depression, the ratio of GR+/NeuN+ cells correlated negatively with depression severity, based on psychiatric history (ad hoc composite score; r = −0.528; P = .006; Spearman correlation), but not with the Beck Inventory (r = −0.359; P = .131; Spearman correlation). A tendency toward a positive correlation was found between the ratio of GR+/NeuN+ cells and the GAF scores (r = 0.297; P = .141; Spearman correlation). In addition, the expression of the GR in glia within the hilus correlated negatively with depression severity (ad hoc composite score; r = −0.42; P = .032; Spearman correlation). With regard to the cognitive variables, memory scores (z-score ipsilateral for the epileptic focus) did not differ between TLE − D and TLE + D cases (P > .05, Student t test) and did not correlate with GR expression in neurons or glial cells (P > .05; Pearson correlation). With respect to the clinical aspects of epilepsy, the epilepsy duration, age at epilepsy onset, and seizure frequency did not correlate with the GR expression in neurons or glial cells (Pearson correlation, P > .05). The distribution of male and female patients in TLE − D and TLE + D groups was similar (P > .05, chi-square test). Regarding sex and GR expression parameters, the expression of the GR in glia was lower in women with TLE-D (x = 82.67, SD = 6.23) as compared to men with TLE-D (x = 105.46, SD = 12.21; t = 4.15, P = .001; Student t test). These differences were not observed in patients with TLE + D (women x = 86.29, SD = 14.60; men x = 76.71, SD = 1.95; t = −1.27; P = .23; Student t test). The other GR expression parameters did not differ between males and females. GR expression did not correlate with age. | DISCUSSION Comorbid depression in TLE occurs frequently among drug-resistant epilepsy patients and strongly affects their quality of life. 7,17 As such, it forms an important concern for both psychiatrists and neurologists. 
Here, hippocampal samples from a cohort of well-characterized TLE patients were studied, focusing on the GR because it has been proposed to be involved in the pathogenesis of depression. 5,8,9,15,16 We found lower expression of GRs in the hippocampus of TLE patients as compared to controls. Furthermore, TLE patients with comorbid depression had lower GR immunoreactivity in the hippocampus as compared to TLE patients without depression. Watzka et al investigated expression of the GR in the brain of patients with TLE and showed that MR and GR messenger RNA (mRNA) expression was lower in hippocampal tissue than in frontal and temporal lobe cortical tissue of women with epilepsy, 32 but in this study they did not make a comparison to control brain tissue or between patients with and without depression. In another study, higher GR gene expression was found in the cortex of drug-resistant TLE patients as compared to controls. 31 However, in that study the hippocampus was not studied, and the authors did not compare TLE patients with and without depression. Furthermore, they discussed that no clear picture has emerged from GR studies using animal models of epilepsy, since in three studies a decrease in GR was observed in the cortex or hippocampus after induced seizures, whereas the expression increased in the hippocampus in another study. 31 In our previous study (using a different cohort), we have shown lower calbindin expression (in the basal part of the granule cell layer), but higher expression in granule cells that were dispersed in the molecular layer of dentate gyrus, in patients with TLE + D vs TLE − D. 6 We discussed that this pattern of calbindin expression may contribute to both hyperexcitability and neuropsychiatric illness, favoring behavioral and cognitive alterations. Liu et al investigated a large group of epilepsy patients (n = 276) and showed indeed that seizure frequency was positively associated with depression severity. 41 We did not observe such differences between epilepsy patients with and without depression in the present study, which may be explained by our smaller sample size compared to Liu et al; F I G U R E 4 Quantitative analysis of GR expression in the dentate gyrus showed that GR+ area was different between the three groups (controls n = 9, TLE − D n = 14 and TLE + D n = 12; three replicates per case). A smaller GR+ area was found in the dentate gyrus of patients with TLE + D as compared to TLE -D, and as compared to controls. TLE − D, temporal lobe epilepsy without depression; TLE + D, temporal lobe epilepsy with depression. *One-way ANOVA/Bonferroni P < .05. however, there was a tendency for a higher seizure frequency in our group of patients with epilepsy and depression. Our findings about reduced GR expression in the hippocampus, particularly in TLE patients with depression, are comparable to those of other studies on chronic stress and depression models. A downregulation of GR mRNA expression was reported in the rodent hippocampus after chronic stress 21,22,25 and after chronic corticosterone exposure. 23 Mizoguchi et al further found reduced GR expression in the prefrontal cortex after chronic stress 24 and also in primates lower expression of GR mRNA was reported in the prefrontal cortex 42 following stress exposure. Similarly, López et al reported a decrease in hippocampal MR and GR expression in suicide victims with a history of depression. 
In addition, similar findings were shown by Klok et al 27 for the GR and MR in major depression in various brain regions including the hippocampus, whereas the GR beta isoform that was thought to be implicated in GR resistance was found to be very rare in the human brain. 29 Furthermore, decreased GR mRNA expression in patients with depression was found exclusively in the dentate gyrus. 26 A reduction in the MR/GR ratio has been reported in the anterior hippocampus from patients with major depression. 12 Overall, these results suggest that alterations in hippocampal GR (and MR) expression are associated with depression. These results appear region-and condition-specific, as GR protein level as well as the percentage of GR-containing astrocytes were found before to be significantly higher in the amygdala in major depression than in bipolar depressed patients or in control subjects, but these authors focused on older depressed patients. 28 Furthermore, we showed that the ratio of GR+/NeuN+ cells as well as the expression of the GR in glia within the hilus correlated negatively with the severity of depression as based on psychiatric history (ad hoc composite score), but not with the Beck Inventory, which provides a measure of the severity of depression at the moment of assessment. This indicates the importance of taking into account the psychiatric history and highlights that the most recent status may not always reflect the observed GR changes in the brain. Regarding models of epilepsy, abundant evidence has demonstrated that HPA activity is enhanced during epileptic seizures, 9 but only a few studies identified hippocampal GR expression in epileptogenic brain tissues. In experimental models with rodents, epileptic discharges and ischemic insults were shown to reduce GR expression in the hippocampal neurons of CA1 and the dentate gyrus neurons. 30,43 Sex differences in hippocampal GR expression have also been described. Lower GR expression was found in the epileptogenic cortex of men compared to women; however, women showed lower GR expression in the hippocampus. 32 GR expression is generally not altered during aging, although an age-associated GR decline was reported in the dentate gyrus of females. 13,44 In the current study, we observed a lower glial GR content in women with TLE -D as compared to men, but these differences were not observed in TLE + D patients. No other differences in GR expression were observed between female and male patients. Most likely, the sample size of this study was too small to find further differences. The GR constitutes a key factor in understanding the mechanisms involved in the pathogenesis of TLE and depression, with potential therapeutic implications. Activation of the GR has genomic and, also, rapid nongenomic actions, each of which may affect hippocampal excitability. 16 The MR has a 10-fold higher affinity than the GR for cortisol, so although MR is almost always occupied, the GR becomes activated only when circulating levels of cortisol increase (ie, under stressful conditions) 12,16 and under conditions of epileptic seizures. 9,16 The hippocampus and particularly the dentate gyrus cells further exert an inhibitory role on the activity of the HPA axis. This negative feedback is involved in termination of the stress response. 45 GR dysfunction may contribute to impair the negative feedback of the HPA axis, which, in turn, could lead to a feed-forward activation of the HPA axis. 
9 Furthermore, dysregulation of the HPA axis and glucocorticoids may affect local and systemic inflammatory mechanisms, which have been found to be altered in both TLE and depression models. 46 An increase in expression of inflammatory markers has been described extensively in various epilepsy models, 47,48 in epilepsy models with depression, 46 and in depression models. 49 In our current study, patients with TLE had hippocampal sclerosis, which is characterized by neuronal loss and gliosis. In addition, we found lower expression of the GR in glial cells from patients with TLE + D. Thus, the reduced GR expression in both neurons and glia may indicate that TLE + D patients have been more exposed to glucocorticoids during life, which may lead to downregulation of GRs. As a result of the prolonged downregulation of hippocampal GR, the inhibitory influence on the HPA axis could have become chronically reduced, thereby stimulating HPA activity even further and creating a vicious cycle. CONCLUSION Reduced hippocampal GR expression may be involved in the etiology of depression in patients with TLE and could constitute a biological marker of depression in these patients.
Mellin transforms of multivariate rational functions This paper deals with Mellin transforms of rational functions $g/f$ in several variables. We prove that the polar set of such a Mellin transform consists of finitely many families of parallel hyperplanes, with all planes in each such family being integral translates of a specific facial hyperplane of the Newton polytope of the denominator $f$. The Mellin transform is naturally related to the so called coamoeba $\mathcal{A}'_f:=\text{Arg}\,(Z_f)$, where $Z_f$ is the zero locus of $f$ and $\text{Arg}$ denotes the mapping that takes each coordinate to its argument. In fact, each connected component of the complement of the coamoeba $\mathcal{A}'_f$ gives rise to a different Mellin transform. The dependence of the Mellin transform on the coefficients of $f$, and the relation to the theory of $A$-hypergeometric functions is also discussed in the paper. Introduction The Mellin transform M h of a locally integrable function h on the positive real axis is defined by the formula provided the integral converges. Here s is a complex variable s = σ + it. The Mellin transform is closely related to the Fourier-Laplace transform via an exponential change of variables. More precisely, the value of M h (s) is equal to the Fourier-Laplace transform of the function x → h(e −x ) evaluated at the point −is. In this paper we consider Mellin transforms of rational functions h = g/f, where g and f are polynomials. Since the general case is easily settled once we have fully investigated the special case where g ≡ 1, and since this will simplify our notation and therefore clarify our argument, we shall focus mainly on the case h = 1/f . Let us start by considering the one-variable situation. Given a polynomial f (z) = a 0 + a 1 z + . . . + a m z m we assume for the moment that its coefficients a 0 , . . . , a m are positive numbers. Then the integral (1) with h = 1/f converges and defines an analytic function in the vertical strip 0 < σ < m. One can in fact make a meromorphic continuation of this Mellin transform and write it as (2) M 1/f (s) = Φ(s)Γ(s)Γ(m − s) , where Φ is an entire function. To see this, let us first look at the case of a simple fraction 1/(a + bx). In this case one has the explicit formula This means that we obtain formula (2) with Φ(s) = Ψ(s)/ (1 − s) · · · (m − 1 − s) . We have thus found that all the poles of the meromorphic continuation are located at the two integer sequences 0, −1, −2, . . . and m + 1, m + 2, . . . emanating from the end points of the interval [0, m]. Notice that this interval is the Newton polytope of our one-variable polynomial f . As a matter of fact, in the above discussion we did not actually need to assume that the coefficients a 0 , a 1 , . . . , a m be positive. A necessary and sufficient condition for the argument to work, and in particular for the integral to converge, is that f (z) = 0 for all real positive values of z. Another way of formulating this latter condition is that f should have no roots with argument zero. We now turn to the multidimensional case, and we begin looking at a simple example with the denominator f being an affine linear polynomial. Example 1. Consider the polynomial f (z) = 1 + z 1 + z 2 . 
The Mellin transform of the corresponding rational function 1/f is then given by the integral ∞ 0 ∞ 0 z s1 1 z s2 which after the coordinate change t = vz 1 , u = vz 2 becomes We shall see in this paper that the fact that the poles of the Mellin transform are determined by a product of Γ-functions is not unique for the special cases we have considered so far. In fact, the Mellin transform of a rational function in any number of variables will turn out to be always a product of Γ-functions in linear arguments, multiplied by some entire function, so that the only poles of the Mellin transform are the poles of the Γ-functions. Moreover, the configuration of polar hyperplanes is governed by the Newton polytope of the denominator polynomial, with one family of parallel hyperplanes emanating from each facet of the Newton polytope. Many of the results have been announced previously in [13]. Let us note in passing that a similar phenomenon can be observed also for Mellin transforms of more general meromorphic functions, with transcendental denominators. As an illustration of this we recall the classical formulas and where ζ denotes the Riemann zeta function. Newton polytopes and (co)amoebas Throughout this paper f will denote a complex Laurent polynomial where A ⊂ Z n is a finite subset and C * denotes the punctured complex plane C\{0}. Here we use the standard notation z α = z α1 1 · · · z αn n for z = (z 1 , . . . , z n ) ∈ C n * . The Newton polytope ∆ f of the polynomial f is defined to be the convex hull of A in R n . We shall primarily be interested in the case where ∆ f has a nonempty interior. Like any other polytope, the Newton polytope ∆ f may be alternatively viewed as the intersection of a finite number of halfspaces: where the µ k ∈ Z n are primitive integer vectors in the inward normal direction of the facets of ∆ f , and the ν k ∈ Z are integers. In general we will let Γ denote a face of the Newton polytope of arbitrary dimension, 0 ≤ dim(Γ) ≤ dim(∆ f ), and we define the relative interior relint(Γ) of such a face to be the interior of Γ viewed as a subset of the lowest dimensional hyperplane containing it. For each face Γ we also introduce the corresponding truncated polynomial consisting of those monomials from the original polynomial f whose exponents are contained in the face Γ of the Newton polytope ∆ f . The amoeba A f and the coamoeba A f of a polynomial f are defined to be the images of the zero set Z f = {z ∈ C n * ; f (z) = 0 } under the real and imaginary parts, Log and Arg respectively, of the coordinatewise complex logarithm mapping. More precisely, one has where Log(z) = (log |z 1 |, . . . , log |z n |) and Arg(z) = (arg(z 1 ), . . . , arg(z n )). Writing w = x + iθ ∈ C n and z = Exp(w) = (exp(w 1 ), . . . , exp(w n )), one obtains the identities x = Log(z) = Re (w) and θ = Arg(z) = Im (w), as illustrated in the following picture. The amoeba A f is a subset in R n , whereas the coamoeba A f can be viewed as being located either in the n-dimensional torus (R/2πZ) n or as a multiply periodic subset of R n . This reflects the multivaluedness of the argument mapping. For brevity of notation we denote the amoeba and the coamoeba of a truncated polynomial f Γ by A Γ and A Γ . Mellin transforms of rational functions The natural generalization to several variables of the standard Mellin transform of a rational function 1/f is given by the integral where R n + = (0, ∞) n denotes the positive orthant in R n . 
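Several display formulas in this introduction and in Example 1 appear to have been lost in extraction. The following is a reconstruction of the one-variable definition, the explicit transform of a simple fraction, the evaluated integral of Example 1, and the multivariate definition (5); the normalization (with the measure dz/z) is inferred from the later parts of the paper and should be read as an assumption.

% One-variable Mellin transform (formula (1)), for s = \sigma + it:
M_h(s) = \int_0^{\infty} h(z)\, z^{s}\, \frac{dz}{z} = \int_0^{\infty} h(z)\, z^{s-1}\, dz .

% Explicit transform of a simple fraction, valid for 0 < \sigma < 1:
\int_0^{\infty} \frac{z^{s}}{a + bz}\, \frac{dz}{z}
   = a^{s-1} b^{-s}\, \Gamma(s)\, \Gamma(1-s)
   = a^{s-1} b^{-s}\, \frac{\pi}{\sin \pi s} .

% Example 1: for f(z) = 1 + z_1 + z_2 one obtains
\int_0^{\infty}\!\!\int_0^{\infty} \frac{z_1^{s_1} z_2^{s_2}}{1 + z_1 + z_2}\,
   \frac{dz_1}{z_1}\, \frac{dz_2}{z_2}
   = \Gamma(s_1)\, \Gamma(s_2)\, \Gamma(1 - s_1 - s_2),
   \qquad \sigma_1 > 0,\ \sigma_2 > 0,\ \sigma_1 + \sigma_2 < 1 .

% Multivariate Mellin transform (formula (5)):
M_{1/f}(s) = \int_{\mathbb{R}^n_+} \frac{z^{s}}{f(z)}\, \frac{dz}{z},
   \qquad z^{s} := z_1^{s_1} \cdots z_n^{s_n},\quad
   \frac{dz}{z} := \frac{dz_1}{z_1} \cdots \frac{dz_n}{z_n} .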
In order for such an integral to converge one has to make some assumptions about the exponent vector s and also about the denominator f . It turns out that it is not enough to demand only that f be non-vanishing on R n + . Definition 1. A polynomial f is said to be completely non-vanishing on a set X if for all faces Γ of the Newton polytope ∆ f the truncated polynomial f Γ has no zeros on X. In particular, the polynomial f itself does not vanish on X. Remark. This concept of completely non-vanishing polynomials is closely related to the notion of quasielliptic polynomials discussed in [5]. If the polynomial f is completely non-vanishing on the positive orthant R n + then the integral (5) converges and defines an analytic function in the tube domain s ∈ C n ; Re s = σ ∈ int ∆ f . Proof. It will suffice to prove that for any given s with σ ∈ int ∆ f there are positive constants c, k > 0 such that The proof is by induction on the dimension n. The case n = 1 is easy. Let α and β with α < β be the two endpoints of ∆ f . Then for sufficiently large negative x one has f (e x ) e −σ·x ≥ 1 2 |a α | e (σ−α)|x| , and for sufficiently large positive x Now make the induction hypothesis that the inequality (6) holds for dimensions ≤ n − 1, and consider a polynomial f of n variables. For each face Γ of ∆ f , with 0 ≤ dim Γ ≤ n − 1, the given point σ can be expressed as a convex combination where σ Γ ∈ relint(Γ) and τ Γ ∈ relint conv(A \ Γ) . Fix a choice of such a point σ Γ in each face Γ, and consider for each Γ the new convex polytope Notice that when dim Γ = 0, that is, when Γ is a vertex of ∆ f , one has ∆ Γ = ∆ f . Notice also that the original point σ belongs to each ∆ Γ . Let C Γ be the outer normal cone to ∆ Γ with vertex at σ Γ : All these cones C Γ are of full dimension n and together they almost cover the entire space R n . More precisely, the complement is a bounded subset of R n . Then one can let C Γ be a slightly smaller closed convex cone, still with vertex at σ Γ , such that C Γ \ σ Γ is contained in the interior of C Γ , and such that the complement of the union ∪ Γ C Γ is still a bounded set. Notice that for x ∈ C Γ \ σ Γ the inequality in (7) will be strict, and we may in fact assume this to be true uniformly. We now observe that it is enough to prove the estimate (6) for x ∈ C Γ . Actually, it suffices to do it for x ∈ C Γ \ B R (0) for some large ball B R (0). From the induction hypothesis we conclude that there are constants c Γ such that Indeed, f Γ (e x ) is a function depending on fewer variables than n, since it is homogeneous in directions orthogonal to Γ, and σ Γ ∈ relint(∆ fΓ ). For each face Γ let g Γ (z) be the function containing all the monomials not on Γ so that f Γ + g Γ = f . Now we use the decomposition f = f Γ + g Γ so that one obtains Take x ∈ C Γ and write x = σ Γ + y. Recall that σ ∈ ∆ f . The first factor e σΓ−σ,x can be estimated from below by c 0 e k|y| with the positive constants c 0 and k given by c 0 = exp σ Γ − σ, σ Γ , and Assuming, which we may, that |x| > |σ Γ |, and hence that |x| − |σ Γ | ≥ |x − σ Γ | = y, we find e σΓ−σ,x ≥ c 1 e k|x| , where c 1 = c 0 e −k|σΓ| . To finish the proof of the inequality we now only need to bound the expression in brackets in (8) from below by a positive constant. From the induction hypothesis we have that |f Γ e − σΓ,x | ≥ c Γ > 0, and it is therefore enough to show that the remainder term g Γ (e x )e − σΓ,x stays small, say < c Γ /2. We have the identity g Γ (e x ) = α∈A\Γ a α e α,x α∈A\Γã α e α,y . 
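The key estimate (6) used throughout this proof is not visible in the extracted text. From the way it is invoked in the base case and in the final step of the induction, it can be reconstructed, as an assumption, as follows:

% Estimate (6): for \sigma \in \operatorname{int} \Delta_f there are constants c, k > 0 with
\bigl| f(e^{x})\, e^{-\langle \sigma, x\rangle} \bigr| \;\ge\; c\, e^{k|x|},
   \qquad x \in \mathbb{R}^n .

In the exponential coordinates z = e^{x} this bound makes the integrand z^{s}/f(z) of the Mellin integral decay exponentially, which is exactly what the convergence statement of Theorem 1 requires.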
Since α ∈ ∆ Γ we have a strictly positive constant and hence a α e α,x = ã α e σΓ−α,y ≤ |ã α | e −kα |y| . This means that for some large enough R 0 one has Hence there is an inequality |f (e x ) exp (− σ Γ , x )| ≥ c Γ /2, and we can conclude that for all x in C Γ \ B R (0), for some large ball B R (0), one has the desired estimate Having thus established the convergence of the integral (5) defining the Mellin transform, we now turn to the question of finding its analytic continuation as a meromorphic function of s in the whole complex space C n . The polar locus of the meromorphic continuation turns out to be a finite union of families of parallel hyperplanes. The normal directions of these hyperplanes are precisely the vectors µ k from the representation (4) of the Newton polytope ∆ f . Theorem 2. If the polynomial f is completely non-vanishing on the positive orthant R n + and its Newton polytope ∆ f is of full dimension, then the Mellin transform M 1/f admits a meromorphic continuation of the form where Φ is an entire function, and where µ k , ν k are the same as in equation (4). Before giving the proof of this theorem let us illustrate the idea of the argument by means of a specific example. It is easy to check that the representation (4) of its Newton polytope is given by so in this case the Newton polygon ∆ f has four inward normal vectors given by , and µ 4 = (0, 1) . We know from Theorem 1 that the Mellin transform M 1/f is holomorphic for all s = (s 1 , s 2 ) whose real part σ = (σ 1 , σ 2 ) lies inside the Newton polygon ∆ f . In order to achieve a meromorphic continuation of M 1/f across the left vertical edge of ∆ f it suffices to perform an integration by parts with respect to z 1 . Indeed, this gives us the identity and we claim that this integral, that is, the Mellin transform multiplied by s 1 , converges for all s with real part σ in the dark triangle on the left in Figure 2. This means that M 1/f has been continued meromorphically over the hyperplane s 1 = 0 as desired. To verify the claim we decompose the integral in (10) into two Mellin type integrals containing the integrands 2z 2+s1 1 z s2 2 /f 2 and z 1+s1 1 z 2+s2 2 /f 2 respectively. Since the Newton polygon of the denominator f 2 is equal to the original ∆ f dilated by a factor 2, we see that the convergence domains for these two integrals are given by the translated polygons (−2, 0)+2∆ f and (−1, −2)+2∆ f respectively. The sum of the integrals therefore converges on the intersection of the translated polygons, and this is precisely the dark triangle on the left in Figure 2. We have thus seen how a meromorphic continuation can be carried out in the horizontal direction, that is, in the direction given by µ 1 . Suppose next that we wish to obtain a similar mermorphic extension across the upper left edge of ∆ f , the one with normal vector µ 2 = (1, −1). The way to acheive such a "directional integration by parts" is to suitably introduce a parameter λ and then to differentate with respect to λ. More precisely, we make the coordinate change z 1 → λz 1 , z 2 → λ −1 z 2 and obtain Here the left hand side is obviously independent of λ. Hence so is the right hand side, and after differentiating and plugging in λ = 1 we find that This relation can be re-written as and reasoning as above we find that this latter integral converges for all s with real part σ in the dark polygon on the right in Figure 2, thereby yielding a meromorphic continuation across the hyperplane s 1 − s 2 = −1. 
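The displayed continuation formula in Theorem 2, referred to below as (9), is missing from the extracted text. Since the proof compares the polar locus of the Mellin transform with that of the product of gamma functions \prod_k \Gamma(\langle \mu_k, s\rangle - \nu_k), the statement should read:

% Formula (9): meromorphic continuation of the Mellin transform
M_{1/f}(s) = \Phi(s)\, \prod_{k=1}^{N} \Gamma\bigl( \langle \mu_k, s\rangle - \nu_k \bigr),

with \Phi entire and with \mu_k, \nu_k the facet normals and offsets from the representation (4) of the Newton polytope, \Delta_f = \{\sigma : \langle \mu_k, \sigma\rangle \ge \nu_k,\ k = 1, \dots, N\}.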
This method of repeatedly performing integration by parts in all the directions µ k , by using the corresponding coordinate changes z j → λ µ kj z j , is the basis for our proof of Theorem 2, and it gives a global meromorphic continuation of the original Mellin integral. For our special example, the picture below indicates the full set of polar hyperplanes, going out in all directions from the Newton polytope ∆ f . Remark. For the Mellin transform of a general rational function g/f each monomial in the numerator g produces an integral similar to the one in the theorem, except that we get a shift in the variable s by an integer vector. This corresponds to a translation of the Newton polytope of f , and hence also of the domain of convergence of that particular integral. If g has several monomials it can very well happen that the intersection of all the corresponding shifted polytopes is empty. In that case the integral defining the Mellin tranform may not actually converge for any values of s. Nevertheless, performing the meromorphic continuation of each of the integrals associated with the monomials from g and then summing these meromorphic functions, we still obtain a natural interpretation of the Mellin transform M g/f as a meromorphic fucntion in the entire s-space. Proof. We prove that the integral (5) can be re-written in such a way as to make it have a larger convergence domain, at the expense of having to multiply the integral with reciprocals of linear terms corresponding to the poles of the gamma functions. In order to achieve this we shall repeatedly "integrate by parts" in each of the directions given by the vectors µ k . Each such step consists in first making the corresponding dilation (z 1 , . . . , z n ) → (λ µ k1 z 1 , . . . , λ µ kn z n ) of the coordinates, then differentiating with respect to the dilation parameter λ, and finally setting λ equal to 1. Note that, if Γ is the facet of ∆ f with inward normal vector µ k , the truncated polynomial f Γ has the homogeneity f Γ (λ µ k z) = λ ν k f Γ (z). Hence, the scaled polynomial λ −ν k f (λ µ k z) has the property that all its monomials with exponents from Γ have coefficients that are independent of the parameter λ. This means that in the differentiated polynomial there are no monomials with exponents from the facet Γ. Its Newton polytope is therefore strictly smaller than ∆ f , with the integer ν k from the original inequality µ k , σ ≥ ν k being replaced by ν k + 1, or possibly by an even larger integer. Starting from the original integral expression (5) for the Mellin transform M 1/f , introducing the parameter λ, and keeping in mind that M 1/f itself is of course independent of λ, we obtain which upon performing the differentation and setting λ = 1 yields the identity As we shall iterate this procedure it will be important to keep track of polytopes of different sizes, and to this end we introduce, for any vector γ ∈ Z n , the notation In particular, we have ∆ f = ∆(ν). Now let m ∈ N N be a given vector, and perform the integration by parts m j times in the direction of µ j , for each j = 1, . . . , N . The total number of such integrations will thus be |m| = m 1 + . . . + m N . We claim that this iterative process leads to an expression for the Mellin transform that is of the form where g m is a polynomial whose Newton polytope satisfies ∆ gm ⊆ ∆(|m|ν + m) and u j (s) = mj −1 =0 µ j , s − ν j + , with the convention u j = 1 if m j = 0. The proof of the claim is by induction. 
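The displays introducing the notation \Delta(\gamma) and the iterated integration-by-parts identity (12) are lost in extraction. Assembling them from the surrounding definitions and from the worked example (where one integration by parts produced the factor s_1 and the denominator f^2), a plausible reconstruction is:

% Notation: for \gamma \in \mathbb{Z}^N set
\Delta(\gamma) := \{\, \sigma \in \mathbb{R}^n : \langle \mu_k, \sigma\rangle \ge \gamma_k,\ k = 1, \dots, N \,\},
   \qquad \text{so that } \Delta_f = \Delta(\nu).

% Identity (12): after m_j integrations by parts in each direction \mu_j,
u_1(s) \cdots u_N(s)\, M_{1/f}(s)
   = \int_{\mathbb{R}^n_+} \frac{g_m(z)\, z^{s}}{f(z)^{1+|m|}}\, \frac{dz}{z},
   \qquad
u_j(s) = \prod_{\ell=0}^{m_j-1} \bigl( \langle \mu_j, s\rangle - \nu_j + \ell \bigr),

with u_j \equiv 1 when m_j = 0 and with \Delta_{g_m} \subseteq \Delta(|m|\nu + m), as stated in the text.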
First we check that it holds true in the case |m| = 1, that is, when m is a standard unit vector e k with 1 in the k'th entry and zeros elsewhere. Indeed, this is precisely the content of formula (11), where we recall that the Newton polytope of g e k is contained in ∆(ν + e k ). Assume now the claim to be true for some given vector m, and let us show that it then holds also for m = m + e k , where e k is a unit vector as before. Introducing again the dilated coordinates λ µ k z, we can re-write the integral in equation (12) as We should then differentiate this expression with respect to λ and put λ = 1. When the derivative falls on the monomial in front of the integral we get a factor µ k , s − ν k + m k which is precisely what needs to be incorporated into the function u k , and when we differentiate under the sign of integration we arrive at an expression of the form The new polynomial in the numerator is g m To finish the proof of the claim we must show that ∆ g m ⊆ ∆(|m |ν + m ). We shall use the fact that the Newton polytope of a product of two polynomials is equal to the (Minkowski) sum of their Newton polytopes, and also the obvious general inclusion ∆(γ) + ∆(δ) ⊆ ∆(γ + δ). Recalling the induction hypothesis, we first see that the Newton polytope of the product g e k g m is contained in the polytope ∆(ν +e k )+∆(|m|ν +m) ⊆ ∆((1+|m|)ν +m+e k ) = ∆(|m |ν +m ). Then, since the polynomialg m has no monomials with exponents on the plane µ k , σ = |m|ν k +m k , we similarly get that the Newton polytope of the other term fg m is contained in ∆(ν) + ∆(|m|ν + m + e k ) ⊆ ∆(|m |ν + m ). From this the claim follows, that is, the Mellin transform is given by (12) with g m satisfying ∆ gm ⊆ ∆(|m|ν + m). Our next step is to prove that the integral in (12) converges and defines an analytic function for all s with real parts σ in the enlarged polytope ∆(ν − m). By considering separately each term of g m , we can infer from Theorem 1 that the domain of convergence will contain (the interior of) the intersection of translates of dilated copies of ∆ f . Let us check that ∆(ν − m) is indeed a subset of (13). Take an arbitrary σ 0 ∈ ∆(ν − m). By definition it satisfies the inequalities (14) µ k , σ 0 ≥ ν k − m k , k = 1, . . . , N . What we have to show is that σ 0 satisfies these inequalities. In view of the inclusion ∆ gm ⊆ ∆(|m|ν + m), we have µ k , τ ≥ |m|ν k + m k for all k. Together with (14) this gives so σ 0 does indeed satisfy (15), and since τ was arbitrary it follows that σ 0 lies in the intersection (13). In the interior of the domain ∆(ν − m) + i R n the only poles of M 1/f are given by u j (s) = 0, j = 1, . . . , N . All these poles are simple. This is the same polar locus as for the product k Γ( µ k , s − ν k ). By the theorem on removable singularities it follows that the quotient M 1/f / k Γ( µ k , s − ν k ) = Φ is holomorphic for σ inside the polytope ∆(ν − m). But here m ∈ N N is arbitrary, and since the union of all the ∆(ν − m) is the entire space R n , we conclude that Φ is in fact an entire function as claimed in the theorem. Two special cases In certain situations we are able to make our description of the Mellin transform even more precise, and explicitly compute the entire function Φ that occurs in front of the gamma factors in Theorem 2. We have already encountered such a case in Example 1 of the introduction, where we considered the transform of the simple fraction 1/(1 + z 1 + z 2 ). 
Elaborating this example just a little further, and considering a more general linear fraction 1/(c 0 + c 1 z 1 + . . . + c n z n ) with each coefficient c k being a positive real number, one easily deduces the formula Proposition 1. Assume that the polynomial f (z) = m k=0 1 + a k , z is a product of affine linear factors, with each a k ∈ R n + . Then the Mellin transform of the rational function 1/f is equal to with the entire function Φ given by Here σ m denotes the standard m-simplex τ ∈ R m + ; τ k < 1 , and the α k (τ ) are affine linear forms defined by Proof. We begin by first computing the Mellin transform of a power of the type 1/(1 + c, z ) m+1 . By performing repeated integrations under the sign of integration we get Then, recalling the formula (16) and using the simple identity we find that Next we make use of the generalized partial fractions decomposition which occurs in the theory of analytic functionals and Fantappiè transforms, see for instance [1] or [11]. From this formula we immediately obtain which yields (17). Here one may remark a close connection to the classical Euler beta function B. Namely, if we let the coefficients a 02 and a 11 become zero, we are left with Φ(s 1 , s 2 ) = a −s1 01 a −s2 we see that the function Φ is no longer entire. This is to be expected however, because the new polynomial f (z 1 , z 2 ) = (1 + a 01 z 1 )(1 + a 12 z 2 ) has a different Newton polygon, and the new Φ should contribute to the change of Γ-factors in the Mellin transform. In fact, when a 02 = a 11 = 0 we have the formula It is not always the case that all the polar hyperplanes of the gamma functions in the representation (9) are actual singularities for the Mellin transform M 1/f . It may happen that the entire function Φ has zeros that cancel out some of the poles. A very simple example of this phenomenon is provided by the function f (z) = 1+z m with m ≥ 2. In this case the substitution z m = w leads to the formula so the polar locus is just mZ. In fact, the entire function Φ from (9) is given by and it has plenty of integer zeros. A slight generalization of this example is provided by the following result. Proof. We make the monomial change of variables z αj1 1 · · · z αjn n = w j , so that z j = w βj1 1 · · · w βjn n and dz/z = δ −1 dw/w. The Mellin transform can then be written and the latter integral is of a similar form as the one in Example 1. Mellin transforms and coamoebas Let us return for a moment to the one-variable Mellin transform where we assume, as before, that the polynomial f does not vanish on the positive real axis and that the real part of s lies in the interior of the Newton interval ∆ f . Our first claim is now that the value of the above integral remains unchanged if the set of integration is rotated slightly. In other words, for |θ| small enough one has the identity To verify this, we perform an integration along a closed path starting at the origin, then running along the positive real axis to the point R, continuing along the circle |z| = R to the point Re iθ , and then going straight back to the origin, see Figure 4 below. Since θ is close to zero, the denominator f has no zeros in the closed sector with arguments between 0 and θ. By the residue theorem the integral over the closed contour is therefore equal to zero, and since the integrand decreases fast when |z| → ∞, the integral over the circular arc C R can be made arbitrarily small by choosing R large enough. The integrals along the two infinite rays are thus equal as claimed. θ C R R Figure 4. 
The contour of integration in the residue computation. From the above argument we see that the directional Mellin transform coincides with the standard one as long as the two directions θ and 0 belong to the same connected component of the coamoeba complement R \ A f . Furthermore, it is clear that the Mellin integral over Arg −1 (θ) converges for every choice of θ outside the coamoeba A f . A similar residue computation as above then again shows that the directional Mellin transform only depends on which connected component of R \ A f it is that contains θ. Turning to the general case n ≥ 1, there are two important differences to be observed. On the one hand we recall the condition in Theorem 1 that the polynomial f should be completely non-vanishing on R n + = Arg −1 (0) in order for the integral to converge, and on the other hand we note that the coamoeba A f is in general not a closed set. The following result connects these two facts, and it allows us to define the directional Mellin transform for each argument θ that does not belong to the closure A f . Theorem 3. For any θ ∈ R n \ A f the polynomial f is completely non-vanishing on the set Arg −1 (θ). Proof. For any given argument vector θ we can consider the new polynomial f θ (z) = f (e iθ1 z 1 , . . . , e iθn z n ). Observe that A f θ + θ = A f , so that 0 ∈ A f θ if and only if θ ∈ A f , and also that f θ is completely non-vanishing on Arg −1 (0) if and only if f is completely non-vanishing on Arg −1 (θ). This means that it actually suffices to prove the theorem for the special case θ = 0. Assume then that f is not completely non-vanishing on the set Arg −1 (0), so that for some face Γ one has 0 ∈ A Γ . In other words, there is an x 0 ∈ R n such that f Γ (e x0 ) = 0. We must show that 0 belongs to the closure A f . This is obvious if Γ = ∆ f , so we can assume that dim Γ ≤ n − 1. Choose a vector µ ∈ Z n and an integer ν ∈ Z such that µ, α = ν for α ∈ Γ and µ, α < ν for α ∈ ∆ f \ Γ. Writing where b α = a α e x0,α and c α = µ, α −ν > 0. Now let ε > 0 be given. Choose a disk D ε of radius ε centered at x 0 and contained in a complex line on which the function w → f Γ (e w ) does not vanish identically. Then translate this disk along the real space, so that D ε − tµ is a disk centered at the point x 0 − tµ for some large positive number t. Since f Γ (e w ) is non-zero on the boundary of D ε we have |f Γ (e w )| ≥ δ > 0 for w ∈ ∂D ε . This means that |f Γ (e w )| ≥ δ e −tν on the translated circle ∂D ε − tµ. Taking t large enough, we also have |g Γ (e w )/f Γ (e w )| < 1 on ∂D ε − tµ, that is, |g Γ (e w )| < |f Γ (e w )|. Rouché's theorem then tells us that f (e w ) = f Γ (e w ) + g Γ (e w ) has a zero w ε in the disk D ε −tµ. So z ε = e wε belongs to the hypersurface f (z) = 0. But we also know that |Arg(z ε )| = |Im w ε | < ε, and since ε was chosen arbitrary we conclude that 0 ∈ A f . Remark. In the above proof we showed that all the facial coamoebas A Γ are contained in the closure A f of the main coamoeba. It is a fact, proved by Johansson [12] and independently by Nisse and Sottile [14], that one actually has an equality Using Theorems 1 and 3 we can now define a directional Mellin transform (18) for any θ in the complement R n \ A f . Just as in the one-variable case discussed earlier in the section, the various Mellin transforms will in fact be equal for all θ that belong to the same connected component of R n \ A f . 
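The display defining the directional Mellin transform (18) does not survive in the extracted text; in analogy with (5), and given the convergence and non-vanishing statements of Theorems 1 and 3, it is presumably

% Directional Mellin transform (18), for \theta \in \mathbb{R}^n \setminus \overline{\mathcal{A}'_f}:
M^{\theta}_{1/f}(s) = \int_{\operatorname{Arg}^{-1}(\theta)} \frac{z^{s}}{f(z)}\, \frac{dz}{z},
   \qquad \operatorname{Re} s \in \operatorname{int} \Delta_f ,

which reduces to (5) for \theta = 0.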
This can be seen by connecting two different values of θ through a polygonal path such that along each edge of the path only one component θ k is being changed. The invariance of the Mellin transform under such a move is then a consequence of the one-variable argument. ln order to put our next theorem in a proper perspective, it seems appropriate at this juncture to recall some known facts about amoebas and Laurents series of rational functions. A reference for these results is [6]. Associated with each connected component E of the amoeba complement R n \ A f is a Laurent series representation 1 f (z) = α∈Z n c E α z −α of the rational function 1/f . The coefficients of the series are given by the integrals where x is any point in the connected component E. Each such Laurent series will converge in the corresponding Reinhardt domain Log −1 (E). We stress the fact that the amoeba A f is always a closed set, so in contrast to the case of coamoebas, there is no need to take the closure of the amoeba. The following result about coamoebas and Mellin transforms provides a practically perfect analogy to the above picture for amoebas and Laurent coefficients. which converges for all z in the domain Arg −1 (E). Here σ is an arbitrary point in int ∆ f and with θ being an arbitrary point in the component E. Proof. From Theorem 3 and (an obvious generalization of) Theorem 1 we see that the integral (20) converges, and from the discussion preceding Theorem 3 we also know that the value of (20) is independent of the particular choice of point θ ∈ E. In order to prove the identity (19) it suffices to verify that, for all s = σ + it such that σ ∈ int(∆ f ), the function x → e s,x+iθ /f (e x+iθ ) is in the Schwartz space S(R n ) of rapidly decreasing functions. Then the result follows from well known facts about inversion of Fourier transforms, see Thm 7.1.5 in [10]. For simplicity, and without loss of generality, we assume that θ = 0. We have |e s,x /f (e x )| = e σ,x /|f (e x )|, and from the inequality (6), which we established in the proof of Theorem 1, we see that e s,x /f (e x ) is an exponentially dercreasing function. It remains to verify that all its partial derivatives have the same property. Computing a typical derivative, we get where f k denotes the derivative of the polynomial f with respect to z k . Here the first term one the right hand side is just a constant times the original function, and the second term is of the form α∈A α k a α e σ+α,x f (e x ) 2 . The Newton polytope of the denominator is ∆ f 2 = 2∆ f , so σ+α ∈ int ∆ f 2 for every α ∈ A, and hence each term in the sum satisfies the conditions of Theorem 1. This means that the derivative (21) is a finite sum of functions to which we can apply Theorem 1 and the inequality (6). By induction this implies that all derivatives of e s,x /f (e x ) decrease exponentially. Remark. It is clear that if E is a connected component of R n \A f that is obtained by just translating another component E by 2πe k , then the corresponding two Mellin transforms are related by the simple formula In general however, the relations between the various Mellin transforms associated with different connected components are rather complicated. Furthermore, it is worth mentioning that all the connected components of R n \ A f are convex sets. This fact follows for instance from the Bochner tube theorem, see [4]. Finally, we point out that Theorem 4 can also be proved by using results from Antipova [2]. 
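The statement of the theorem discussed in this passage (the coamoeba analogue of the Laurent expansion, with its displayed formulas (19) and (20)) has been lost in extraction. Judging from the Fourier-inversion argument in the proof, the intended identity is an inverse Mellin representation of roughly the following shape; the normalizing constant is an assumption:

% Formula (19): for every connected component E of \mathbb{R}^n \setminus \overline{\mathcal{A}'_f},
\frac{1}{f(z)} = \frac{1}{(2\pi i)^n} \int_{\sigma + i\mathbb{R}^n} M^{\theta}_{1/f}(s)\, z^{-s}\, ds,
   \qquad z \in \operatorname{Arg}^{-1}(E),

% Formula (20): with the directional Mellin transform taken over any \theta \in E,
M^{\theta}_{1/f}(s) = \int_{\operatorname{Arg}^{-1}(\theta)} \frac{z^{s}}{f(z)}\, \frac{dz}{z},
   \qquad \sigma = \operatorname{Re} s \in \operatorname{int} \Delta_f .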
Hypergeometry In this final section we shall consider the dependence of the Mellin transform, and in particular of the entire function Φ, on the coefficients a = {a α } of the polynomial f . In order to emphasize this dependence we are here going to write Φ(a, s) rather than just Φ(s). The crucial observation will be that, with respect to the variables a, the function Φ is an A-hypergeometric function in the sense of Gelfand, Kapranov and Zelevinsky. More precisely, a → Φ(a, s) satisfies the Ahypergeometric system of partial differential equations with homogeneity parameter β = (−1, −s 1 , −s 2 , . . . , −s n ). Let us recall the structure of the A-hypergeometric system. Our starting point is the subset A ⊂ Z n of exponent vectors occurring in the expression (3) for the polynomial f . We introduce a numbering α 1 , . . . , α N of the elements of A, with each α k = (α 1k , . . . , α nk ) ∈ Z n . Abusing the notation slightly, we write A also for the (1 + n) × N -matrix whose column vectors are (1, α k ). For any vector v ∈ Z n we denote by v + and v − the vectors obtained from v by replacing each component v k by max(v k , 0) and max(−v k , 0) respectively, so that v = v + − v − . Definition 2. Let A denote a subset {α 1 , . . . , α N } ⊂ Z n and the associated (1 + n) × N -matrix as above. The A-hypergeometric system of differential equations with homogeneity parameter β ∈ C n is then given by b F (a) = 0, b ∈ Z N , Ab = 0, and E β j F (a) = 0, j = 0, 1, . . . , n , where the differential operators b and E β j are given by An analytic function F that solves the system is called A-hypergeometric with homogeneity parameter β. Remark. We are assuming N ≥ 1 + n, and as soon as this inequality is strict there are of course infinitely many vectors b satisfying Ab = 0, but it is a known fact, see [15], that the system is in fact determined by a finite number of operators b . Let us now, for a given choice of coefficients a, consider an entire function s → Φ(a, s) as described in Theorems 2 and 4. We want to study what happens when we start varying a. Recall from [7] and [8] the notion of the principal Adeterminant E A , also known as the full A-discriminant. It is a polynomial in the variables a, with the property that its zero set Σ A ⊂ C N contains the singular locus of all A-hypergeometric functions. Theorem 5. Take a ∈ C N \ Σ A and let E be a connected component of R n \ A f , with f being the polynomial f (z) = a 1 z α1 + . . . + a N z α N . Also take s ∈ C n with Re s ∈ int ∆ f . Then the analytic germ has a (multivalued) analytic continuation to (C N \ Σ A ) × C n which is everywhere A-hypergeometric in a with varying homogeneity parameter (−1, −s 1 , . . . , −s n ). Proof. First of all it is clear that θ will be disjoint from A f also for polynomials f with coefficients a k near the original ones, say in a small ball B(a), so that the integral does indeed define an analytic germ Φ(a, s). From (a straightforward generalization of) our Theorem 2 we also know that Φ is extendable as an entire function with respect to the variables s. In other words, we already have an analytic extension of Φ to the infinite cylinder B(a) × C n . Let us next verify that Φ is an A-hypergeometric function with the correct homogeneity parameter. When doing this we first fix s at an arbitrary value with Re s ∈ int ∆ f , hence in particular away from the polar hyperplanes of the gamma functions. 
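The displayed formulas for the operators \Box_b and E^{\beta}_j in Definition 2 are missing from the extracted text. What is presumably intended are the standard Gelfand-Kapranov-Zelevinsky (GKZ) operators:

% Toric (box) operators, one for each b \in \mathbb{Z}^N with Ab = 0:
\Box_b = \prod_{k=1}^{N} \partial_{a_k}^{\,b^+_k} \;-\; \prod_{k=1}^{N} \partial_{a_k}^{\,b^-_k},

% Euler operators, one for each row of the matrix A (row 0 consists of ones):
E^{\beta}_0 = \sum_{k=1}^{N} a_k \partial_{a_k} - \beta_0,
   \qquad
E^{\beta}_j = \sum_{k=1}^{N} \alpha_{jk}\, a_k \partial_{a_k} - \beta_j, \quad j = 1, \dots, n,

so that the system reads \Box_b F(a) = 0 for all b with Ab = 0, and E^{\beta}_j F(a) = 0 for j = 0, 1, \dots, n. This is consistent with the homogeneities verified in the proof of Theorem 5, where \beta = (-1, -s_1, \dots, -s_n).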
Then the function in front of the integral is just a non-zero constant and we can deal directly with the integral, by differentiation under the integral sign. Notice that the condition that Ab = 0 amounts to the two identities |b + | = |b − | and b + , α = b − , α , were we have used the shorthand notation |b ± | = b ± k and b ± , α = b ± k α k . Computing iterated derivatives of the integrand 1/f in the Mellin integral we get and since here the right hand side is independent of the choice of sign in b ± , so is the left hand side. This means that b (1/f ) = 0, and hence we also have b Φ = 0. It is obvious that Φ is homogeneous of degree −1 with respect to the variables a k . To check the other homogeneities one can integrate by parts in the integral. As in our proof of Theorem 2 this can be efficiently done by dilating the variables by means of a parameter λ. For example, making the dilation z j → λz j we get Differentiating both sides of this identity with respect to λ and then putting λ = 1, we find that and hence E β j Φ(a, s) = 0, with β j = −s j as claimed. We have thus established that Φ is an A-hypergeometric analytic function in the product domain B(a) × (int ∆ f + i R n ), and by uniqueness of analytic continuation its extension to the cylinder B(a) × C n will remain A-hypergeometric. Next, by the general theory of A-hypergeometric functions one has, for each fixed s, a (typically multivalued) analytic continuation of a → Φ(a, s) from B(a) to all of C N \ Σ A . Well known results on analytic functions of several variables then tell us that these continuations will still depend analytically on s, so we have achieved the desired analytic continuation to the full product domain (C N \ Σ A ) × C n . The uniqueness of analytic continuation again guarantees that Φ will everywhere satisfy the Ahypergeometric system with the homogeneity parameter (−1, −s 1 . . . . , −s n ). Related integral representations of A-hypergeometric functions have been considered by several authors, see for instance [9] and [3]. It is probably instructive to examine a concrete special instance of the above theorem, and we choose to present the case of the classical Gauss hypergeometric function. Example 3. Take A = {(0, 0), (1, 0), (0, 1), (1, 1)} to consist of the four corners of the unit square in the first quadrant. It is easy to check that in this case Σ A is given by the equation E A (a) = a 1 a 2 a 3 a 4 (a 1 a 4 − a 2 a 3 ) = 0. Then consider the polynomial f (z) = a 1 + a 2 z 1 + a 2 z 2 + a 4 z 1 z 2 together with its associated Mellin transform Let us compute this transform for simplicity first in the case a 1 = a 2 = a 3 = 1. Writing f as (1 + z 2 ) + (1 + a 4 z 2 )z 1 , we can use formula (16) and first perform the integration with respect to z 1 . This yields the expression Γ(s 1 )Γ(1 − s 1 ) Notice that the three singular points 0, 1 and ∞ for the Gauss function correspond to the factors a 1 a 4 − a 2 a 3 , a 1 a 4 and a 2 a 3 of the principal A-determinant E A .
Expectations for methodology and translation of animal research: a survey of health care workers Background Health care workers (HCW) often perform, promote, and advocate use of public funds for animal research (AR); therefore, an awareness of the empirical costs and benefits of animal research is an important issue for HCW. We aim to determine what health-care-workers consider should be acceptable standards of AR methodology and translation rate to humans. Methods After development and validation, an e-mail survey was sent to all pediatricians and pediatric intensive care unit nurses and respiratory-therapists (RTs) affiliated with a Canadian University. We presented questions about demographics, methodology of AR, and expectations from AR. Responses of pediatricians and nurses/RTs were compared using Chi-square, with P < .05 considered significant. Results Response rate was 44/114(39%) (pediatricians), and 69/120 (58%) (nurses/RTs). Asked about methodological quality, most respondents expect that: AR is done to high quality; costs and difficulty are not acceptable justifications for low quality; findings should be reproducible between laboratories and strains of the same species; and guidelines for AR funded with public money should be consistent with these expectations. Asked about benefits of AR, most thought that there are sometimes/often large benefits to humans from AR, and disagreed that “AR rarely produces benefit to humans.” Asked about expectations of translation to humans (of toxicity, carcinogenicity, teratogenicity, and treatment findings), most: expect translation >40% of the time; thought that misleading AR results should occur <21% of the time; and that if translation was to occur <20% of the time, they would be less supportive of AR. There were few differences between pediatricians and nurses/RTs. Conclusions HCW have high expectations for the methodological quality of, and the translation rate to humans of findings from AR. These expectations are higher than the empirical data show having been achieved. Unless these areas of AR significantly improve, HCW support of AR may be tenuous. Background Biomedical animal research (AR) involves some harm to sentient animals including distress (due to confinement, boredom, isolation, and fear), pain, and early death [1][2][3]. AR is said to be morally permissible because the balance of these costs (harms to the animals) and benefits (to human medical care, quality of life, and survival) is favorable [4]. It is generally assumed that the benefits are great to human medicine [5]. An awareness of the empirical costs and benefits of AR is an important issue in medicine for several reasons. Health care workers (HCW) often perform (and are expected to perform) AR, promote AR directly with trainees and indirectly as role models, and advocate for use of public funds (from granting agencies and charitable foundations) toward medical related AR. Since most AR is funded by public money through government and charitable granting agencies, it is important to know the public perception of, and the level of public support for AR. Surveys of the public find that the majority are 'conditional acceptors' of AR; they accept the practice because of the promise of cures and treatments for life-threatening and debilitating human diseases, so long as animal welfare is at least minimally considered and protected [41]. To our knowledge, no survey has asked for the details of this conditional acceptance of AR. 
In this survey we ask HCW directly what the minimal acceptable standards in AR methodology might be, and what the minimal acceptable translation rate of AR to human treatments might be. This is important in order to determine how strong the support is for the empirical practice of AR, and how AR could be improved to increase the level of support. We found that HCW have high expectations for the methodological quality of, and the translation rate to humans of findings from AR. Questionnaire administration All pediatricians and pediatric intensive care unit nurses and respiratory therapists (RTs) who are affiliated with one Canadian University were e-mailed the survey using an electronic, secure, survey distribution and collection system (REDCap, Research Electronic Data Capture) [42]. A cover letter stated that "we very much value your opinion on this important issue" and that the survey was anonymous and voluntary. We offered the incentive that if the response rate was at least 70% we would donate $1000 to the Against Malaria Foundation or the PICU Social Committee. Non-responders were sent the survey by e-mail at 3-week intervals for 3 additional mailings. Questionnaire development We followed published recommendations [43]. To generate the items for the questionnaire, we searched Medline from 1980 to 2012 for articles about the methodology and translation of AR. This was followed by collaborative creation of the background section and questions for the survey by the authors. Content and construct validation were done using a table of specifications filled out by experts including two ethics philosophy professors, and two pediatricians. Face and content validation were done by pilot testing of the survey, by non-medical, universityeducated lay people (n = 9), pediatricians (n = 2), pediatric intensive care nurses (n = 2), and an ethics professor (n = 1). Each pilot test was followed by a semistructured interview by 1 of the authors to ensure clarity, realism, validity, and ease of completion. A published clinical sensibility tool was used for the expert and pilot testing [43]. After minor modifications, the survey was approved by all the authors. Questionnaire content The background section stated: "In this survey, 'animals' means: mammals, such as mice, rats, dogs, and cats. It has been estimated that over 100 million animals are used in the world for research each year. There are many good reasons to justify AR, which is the topic of this survey. Nevertheless, some people argue that these animals are harmed in experimentation, because their welfare is worsened. In this survey, 'harmful' means such things as: pain, suffering (disease/injury, boredom, fear, confinement), and early death. This survey is about how AR should be performed. We value your opinion on the very important issue of the methodology of AR." We presented demographic questions, 15 questions that asked respondents "about the methods of AR that are commonly discussed by animal researchers", 4 questions that asked the respondent "to consider what you think the benefits to humans are as a result of AR", and 8 questions that asked the respondent "for your opinions about what you expect from AR paid for with public funds (for example, funding by government using tax dollars, or charitable foundations using donations)." 
Response choices included scales of "strongly agree, agree, undecided, disagree, strongly disagree", "nearly always, often, sometimes, not often, almost never", and "5-20%, 21-40%, 41-60%, 61-80%, over 80%" depending on the type of question. All the questions are shown in the Tables 1, 2, 3, and 4. Ethics approval The study was approved by the health research ethics board 2 of our university (study ID Pro00039590) and return of a survey was considered consent to participate. Statistics The web-based tool (REDCap) allows anonymous survey responses to be collected, and later downloaded into an SPSS database for analysis. The proportions of respondents with different answers were expressed as percentages. The responses of the two predefined groups, pediatricians and pediatric intensive care unit nurses/ RTs, were compared using the Chi-square statistic, with P ≤ .05 after Bonferroni correction for multiple comparisons considered significant. Pediatricians Demographics Forty-eight responded, but only 44/114 (39%) gave responses to more than the demographic questions. Demographics are given in Table 1. Expectations regarding methodology of AR The majority of respondents agreed that: anesthetic use should be monitored during surgery (100%), pain should be monitored after this surgery even over-night (91%), and experimenters in a research study should have similar training on the procedures involved (97%) ( Table 2). The majority disagreed that it is acceptable: to use less humane methods of euthanasia to reduce costs or improve results (82% or 52% respectively), to use animals when alternatives are available (73%), to do an animal experiment without a systematic literature review (100%), and to do an animal experiment using suboptimal methods (including randomization, blinding, and primary outcome specification) in order to save costs (82-93%). Only a minority of respondents agreed that failed animal models of a disease should continue to be used (30%), or that stressed animals should be used (37%). Finally, the majority agreed that guidelines consistent with these responses should be required for publicly funded AR (95%). Perceptions of human benefits from AR Most respondents believe that discoveries from AR sometimes or often lead to a treatment for human disease directly (77%) or indirectly (84%), and that researchers sometimes or often claim large benefits from AR (91%) ( Table 3). The majority did not agree (84%) with the statement that "AR rarely produces benefits to humans." Expectations for translation to humans from AR paid for with public funding The majority of respondents think that drugs tested on animals should correctly predict the following for humans at least 41% of the time: adverse reactions (69% of respondents), disease treatment (62% of respondents), carcinogenicity or teratogenicity (74% of respondents), and treatment of stroke, severe infection, cancer, brain or spinal cord injury (59% of respondents). The majority also expected that replication of AR findings in second laboratories or other strains of the animal should occur at least 61% of the time (95% and 68% of respondents respectively). The majority agreed that misleading (in terms of human benefit and/or harm) animal experiments should occur at most 40% of the time (86% of respondents). Finally, when asked to "assume drugs studied in animals accurately predict effects in humans less than 20% of the time. If this were true, it would significantly reduce your support for animal research", 40% disagreed (Table 4). 
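The comparison described in the Statistics paragraph above (responses of pediatricians versus nurses/RTs compared question by question with the chi-square statistic, using P ≤ .05 after Bonferroni correction) can be sketched as follows. The contingency table and the number of questions used for the correction are hypothetical placeholders, not the survey data.

# Sketch of the per-question chi-square comparison between the two respondent
# groups, with a Bonferroni-adjusted significance threshold.
# Counts below are hypothetical placeholders, not the survey data.
from scipy.stats import chi2_contingency

n_comparisons = 27                     # e.g. the 15 + 4 + 8 questions described above
alpha_bonferroni = 0.05 / n_comparisons

# Rows: pediatricians, nurses/RTs; columns: agree / undecided / disagree
observed = [
    [30, 6, 8],
    [45, 10, 14],
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
print("significant after Bonferroni correction:", p <= alpha_bonferroni)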
Nurses/RTs Expectations regarding methodology of AR The majority of respondents agreed that: anesthetic use should be monitored during surgery (98%), pain should be monitored after this surgery even over-night (96%), and experimenters in a research study should have similar training on the procedures involved (96%) (Table 2). The majority disagreed that it is acceptable: to use less humane methods of euthanasia to reduce costs or improve results (87% or 81%), to use animals when alternatives are available (88%), to do an animal experiment without a systematic literature review (96%), and to do an animal experiment using suboptimal methods (including randomization, blinding, and primary outcome specification) in order to save costs (87-95%). Only a minority of respondents agreed that failed animal models of a disease should continue to be used (27%), or that stressed animals should be used (19%). Finally, the majority agreed that guidelines consistent with these responses should be required for publicly funded AR (91%). Perceptions of the benefits to humans from AR Most respondents believe that discoveries from AR sometimes or often lead to a treatment for human disease directly (84%) or indirectly (88%), and that researchers sometimes or often claim large benefits from AR (97%) (Table 3). The majority did not agree (87%) with the statement that "AR rarely produces benefits to humans." Expectations for translation to humans from AR paid for with public funding The majority of respondents think that drugs tested on animals should correctly predict the following for humans at least 41% of the time: adverse reactions (85% of respondents), disease treatment (82% of respondents), carcinogenicity or teratogenicity (89% of respondents), and treatment of stroke, severe infection, cancer, brain or spinal cord injury (88% of respondents). The majority also expected that replication of AR findings in second laboratories or other strains of the animal should occur at least 61% of the time (92% and 83% of respondents respectively). The majority agreed that misleading (in terms of human benefit and/or harm) animal experiments should occur at most 40% of the time (84% of respondents). Finally, when asked to "assume drugs studied in animals accurately predict effects in humans less than 20% of the time. If this were true, it would significantly reduce your support for animal research", only 6% disagreed (Table 4). Differences between pediatricians versus nurses/RTs There were few statistically significant differences. Nurses more often responded that drugs for stroke, severe infection, cancer, brain or spinal cord injury should work in humans. Nurses were more uncertain whether AR "rarely produces benefits to humans", and would be less supportive of AR if it accurately predicted effects in humans <20% of the time. (Table notes: There were no statistically significant differences in responses between pediatricians and nurses/RTs on any of these questions. There was a statistically significant (p < 0.001) difference in response between pediatricians versus nurses/RTs to the question "Some people argue that animal research rarely produces benefits to humans. Do you agree that this is likely?") Discussion There are several important findings from this survey. First, most HCW respondents expect that AR is done with high methodological quality, and that costs and difficulty are not acceptable justifications for lower quality. Most expect that guidelines for AR funded with public money should be consistent with these expectations.
Second, most respondents thought that there are either sometimes or often large benefits to humans from AR. Most disagreed that "AR rarely produces benefit to humans." Third, most respondents expect that AR findings should translate to humans at least 41% of the time, with many expecting this at least 61% of the time. This includes AR findings of adverse events (toxicity), carcinogenicity and teratogenicity, and disease treatments. The majority thought misleading AR results should occur no more often than 20% of the time. If translation from AR to humans was to occur <20% of the time, most would be less supportive of AR. Finally, most respondents expect that AR findings should be reproducible between laboratories and between strains of the same species. There are important implications of these findings for public and HCW acceptance of AR (Table 5). There was a statistically significant (p < 0.001) difference in response between pediatricians versus nurses/RTs to the two questions: "Drugs that work well in animals with stroke, severe infection, cancer, brain or spinal cord injury should work in humans at least what percent of the time?" and "Assume drugs studied in animals accurately predict effects in humans less than 20% of the time. If this were true, it would significantly reduce your support for animal research." Previous public surveys have generally asked only whether people support AR for human benefit, and not asked people to evaluate the details of their expectations of AR. For example, the Eurobarometer asks "scientists should be allowed to experiment on animals like dogs and monkeys if this can help sort out human health problems"; in 2010, 44% of Europeans responded 'agree' and 37% 'disagree' [44]. This support for AR was linked with "greater appreciation of the contributions of science to the quality of life" and "an omnipotent vision of science" [45]. In the UK the 2012 Ipsos MORI determined that most (85%) are 'conditional acceptors' of AR; people accept AR "so long as it is for medical research purposes", "for life-threatening diseases", "so long as there is no unnecessary suffering", or "where there is no alternative", considering AR as a "necessary evil" for human benefit [41]. In the United States the 2011 Gallup's Values and Beliefs survey found that when asked whether medical testing on animals is morally acceptable or morally wrong, 43% (and 54% of young adults 18-29 yr old) responded 'morally wrong' [46]. In a survey in Sweden including patients with rheumatoid arthritis and scientific expert members of research ethics boards, most respondents agreed to AR for at least some type of biomedical research. Support was highest for AR into "fatal diseases" (83.1%), and diseases with "insufficient treatment options" (82.1%) [47]. In a UK survey of scientists promoting AR, lay public, and animal welfarists, the support for AR (on a Likert scale of 7) was 5.33 (1.46), 3.57 (1.70), and 1.48 (0.87) respectively. Scientists and lay public supported animal use only for "medical research", and not for dissection, personal decoration, or entertainment [48]. These surveys suggest people support AR on the understanding that it is necessary to provide significant benefit for humans with severe diseases, and is done to high ethical standards. However, none asked for the amount of detail as in our survey. Some qualitative research also suggests there is conditional public acceptance of AR based on a utilitarian analysis of costs (to animals) and benefits (to humans) [49,50]. 
This conditional acceptance is usually based on the assumption that regulation has assured that AR is done to high animal welfare standards, is of high scientific validity and merit (i.e., high quality research, leading to human benefit and cures), and that there are no alternative research methods [49][50][51]. Scientists understand this role of regulation as leading to societal acceptance of AR, and see regulation as legitimating AR practice [51][52][53].
Table 5 (HCW expectations of AR set against the published evidence and their implications):
Compatible with recommendations of recent guidelines from the UK, USA, and Canada [63][64][65]. Studies have found poor reporting of animal welfare, including poor attention to pain control, and not using the most acceptable methods of euthanasia [11,12]. AR may need to be of much higher animal welfare quality in order to maintain public and HCW support.
AR is done using the best known methods: high standards of methodological quality. b Compatible with recommendations of recent guidelines from the UK, US, and Canada [63][64][65]. AR may need to be of much higher methodological quality in order to maintain public and HCW support.
AR often produces benefit to humans. Press releases by academic medical centers often promote AR, and most claim relevance to human health without caveats about extrapolating results to people [66]. Of published basic research papers, 0.004% led to the development of a clinically useful class of drugs [67]. Most HCW may not be aware of the literature regarding translation of AR. AR may need to be much better at predicting human responses to drugs and disease in order to maintain public and HCW support.
AR: animal research; HCW: health care workers. a For example, monitoring and titration of anesthesia, monitoring and titration of pain control even over-night, using the most humane known methods of euthanasia, avoiding stressed animals, and using the fewest number of animals possible. b For example, performing a systematic literature review to inform study design, using optimal design including randomization and blinding, attention to training of staff, and to choosing models that have shown translation of findings to humans. c For example, most think translation rate should be over 40%, that misleading results for humans should occur no more than 20% of the time, and that if this was not the case their support for AR would be significantly reduced.
However, our survey suggests that this trust in regulation may be misplaced, because regulation does not result in AR that meets HCW expectations for animal welfare, methodological quality, human benefit, or rates of translation to human medicine and cures (Table 5). Moreover, these studies showed that the public is far less accepting of the use of genetically modified animals in research, based on a deontological approach where this AR is seen as 'wrong' [49,50]. We did not ask about the common use of genetically modified animals in AR, and therefore may have underestimated HCW expectations of AR. There are two main explanations for the poor predictive accuracy of AR for humans. First, it is possible that the poor methodological quality of AR has resulted in a biased literature that has led to many human trials based on inappropriate data. Second, it is possible that animal models are not good 'causal analogical models'; not useful to extrapolate findings to humans because there are major causal disanalogies between species [54,55]. Animal models are based on this reasoning: when an animal model is similar to the human with respect to traits/properties a,b,c [e.g.
fever, hypotension, and kidney injury in sepsis], and when the animal model is found to have property d [e.g. response to protein-C treatment], then it is inferred that the human also likely has property d. This 'causal analogy' assumes that there are few causal disanalogies: few properties e,f,g that are unique to either the animal or human and that interact causally with the common properties a,b,c. However, animals are evolved complex systems; they have a myriad of interacting modules at hierarchical levels of organization [56]. As a result of this complexity, animals have emergent properties [e.g. animal traits/functions, like property d] that are dependent on initial conditions [e.g. gene expression profiles, the context of the organism, like properties a,b,c, and e,f,g]. In complex systems [e.g. animals], very small differences in initial conditions [e.g. properties e,f,g specific to a species/strain] can result in dramatic differences in response to the same perturbation [e.g. drug, treatment, or disease leading to property d] [54][55][56][57][58]. There is much empirical data finding major causal disanalogies between animal species: differences in gene expression at baseline and in response to perturbations, and in disease susceptibilities [59][60][61][62]. Thus, complexity science suggests there may be an in principle limitation for AR to predict human responses. Our survey suggests that these competing explanations must be sorted out to determine whether translation can meet public expectations in weighing the costs and benefits of AR. This study has several limitations. Response rates for pediatricians and nurses/RTs were 39% and 58% respectively; thus we cannot rule out biased participation in the survey. Statements presented needed to be short and concise, and this may have left out important details that would have influenced the understanding of and response to the text. The moderate sample size from one University limits the generalizability of our results. Nevertheless, this is the first survey we are aware of that asks any group not just to consider whether they support AR; rather, to consider in detail the expectations for the methodology and translation of AR. Strengths of this study include the rigorous survey development process, and the inclusion of the most common critiques of the empirical practice of AR. Future study should determine the generalizability of our results. Conclusion We found HCW respondents had high expectations for the methodological quality of AR, and the translation of findings from AR to human responses to drugs and disease. These expectations are far higher than the empirical data show having been achieved. This disconnect between HCW expectations of AR and the empirical reality of AR suggests that if HCW were better informed they would likely withdraw their conditional support of AR. Improved methodological quality is an achievable goal if this is prioritized by researchers, reviewers, editors, and funders. Whether methodologically optimal AR can achieve better human translation to meet HCW expectations is an open question.
Preparation and evaluation of floating tablets of pregabalin. Abstract Floating tablets of pregabalin were prepared using different concentrations of gums (xanthan gum and guar gum), Carbopol 974P NF and HPMC K100. Optimized formulations were studied by physical tests, floating time, swelling behavior, in vitro release studies and stability studies. In vitro drug release was higher for tablet batches containing guar and xanthan gum as compared to the batches containing Carbopol 974P NF. Tablet batches were subjected to stability studies and evaluated by different parameters (drug release, drug content, FTIR and DSC studies). The optimized tablet batch was selected for in vivo pharmacodynamic studies (PTZ-induced seizures). The results obtained showed that the onset of jerks and clonus was delayed and the extensor phase was abolished with time in the treated groups. A significant difference (p < 0.05) was observed between the behavior of the control and treated groups, indicating excellent activity of the formulation over a long period (>12 h). Introduction The major advantages of gastroretentive systems are the improved bioavailability of drugs that are readily absorbed from the GIT 1,2 , while the main limitation of floating systems is the requirement of a high amount of fluid in the stomach so that the formulation can float and work efficiently 3,4 . Gastro-retentive systems are usually appropriate for drugs having certain properties/limitations such as local action, good absorption from the stomach, poor aqueous solubility, instability at alkaline pH and/or a narrow absorption window 1,[5][6][7] . Pregabalin is an antiepileptic used for the treatment of partial seizures, diabetic neuropathy and post-herpetic neuralgia [18][19][20] . Pregabalin has many advantages over other antiepileptic drugs, such as the lack of any pharmacokinetic interaction with other medications or enzyme induction. Pregabalin has variable absorption through the gastrointestinal tract. It is preferentially absorbed in the upper segment of the GI tract. Moreover, the short half-life of pregabalin (4.5-7 h) also emphasizes the need for a sustained release system 21,22 . Hence, to ensure maximum absorption from the stomach over an extended period, a gastro-retentive system of pregabalin would be the most preferable dosage form. One of the important criteria for the fabrication of gastro-retentive systems is the selection of appropriate controlled drug carriers (polymers/gums). Natural gums are hydrophilic in nature, cost-effective, safe, easily available, biodegradable and biocompatible, and are hence preferred for the development of matrix-based controlled release and delayed release drug delivery systems [23][24][25] . Guar gum is a natural gum/polysaccharide (also known as a galactomannan) obtained from Cyamopsis tetragonolobus and composed of the sugars galactose and mannose. It is widely used as a binder and disintegrant in tablets, but is also used in controlled release formulations such as matrix tablets. Guar gum disperses and swells immediately after being placed in water. Guar gum is widely used in the food industry as a stabilizer and thickener 26 . Xanthan gum is a polysaccharide secreted by the bacterium Xanthomonas campestris and produced by the fermentation of glucose, sucrose or lactose. It is used as an emulsifier and stabilizing agent in various formulations. Xanthan gum swells in water and acts as a disintegrating agent in tablets 27 . HPMC (hydroxypropyl methylcellulose) is a semi-synthetic multifunctional carbohydrate polymer, available in various grades and viscosities.
It is widely used as a thickener, tablet binder (2-5%), coating material and controlled-release agent in tablets. It retards the rate of drug release from matrix tablets 28 . The present research work deals with the methodology to formulate a gastro-retentive floating tablet system (consisting of xanthan gum, guar gum, HPMC K100 and Carbopol 974P NF) which can help in controlling the release of pregabalin over an extended period. Materials Pregabalin was a generous gift from Ranbaxy Laboratories, Gurgaon, India. 1-Fluoro-2,4-dinitrobenzene, xanthan gum, guar gum, Carbopol 974P NF and HPMC K100 were purchased from Himedia Lab Ltd., Mumbai, India, while spray dried lactose DC was purchased from Fisher Scientific, Mumbai, India. All other reagents and chemicals used were of analytical grade. Preparation of powder blends and floating tablets Floating tablets were prepared using gums, i.e. xanthan gum and guar gum, HPMC K100 and Carbopol 974P NF. Sodium bicarbonate was used as the floating agent. Guar gum (6.6-26.7%), xanthan gum (6.7-35.7%), HPMC K100 (8.3-46%) and Carbopol 974P NF (15.3-20%) were used as extended release polymers as well as binders. Magnesium stearate (1%) and talc (1.8-2%) were used as lubricant and glidant, respectively. Spray dried lactose DC was used as the directly compressible excipient. The method used for tablet preparation was direct compression. Floating tablets were prepared by mixing the API and all other excipients except the lubricant and glidant and then sieving the powder blend to obtain a uniform particle size. The lubricant and glidant were then added to the blend and mixed for 15 min. Powder blend equivalent to the tablet weight was weighed individually and compressed on a rotary tablet press using 11 mm punches. Table 1 shows the composition of the tablets prepared. Powder blends Flow properties. Flow rate and angle of repose were determined. For determining the flow rate, a known weight of powder blend was poured into a funnel and the time required to pass through it was recorded. The flow rate was calculated as the quotient of the weight of the powder blend and the time in seconds. Angle of repose is a measure of the flow properties of powders or pellets. The powder blend was poured gently through a funnel, which was fixed at a position such that its lower tip was at a height exactly 2 cm above a hard surface. The powder blend was poured until the upper tip of the pile surface touched the lower tip of the funnel. The tan⁻¹ of (height of the pile/radius of its base) gave the angle of repose. Tapped and bulk densities. Powder blend was poured gently through a glass funnel into a 10 mL graduated cylinder until the powder blend just touched the 10 mL mark, and the weight of powder required to fill the cylinder volume was recorded. The cylinder was then tapped from a height of 2 cm until there was no further volume change. Bulk density was calculated as the quotient of the weight of the powder blend and the volume of the cylinder used. Tapped density was calculated as the quotient of the weight of the powder and its volume after tapping. Physiochemical interaction. To determine the compatibility of pregabalin with the different excipients, FTIR spectroscopy and DSC thermal analysis were performed. FTIR spectra were obtained using KBr pellets in an FTIR spectrophotometer (Shimadzu-8400S, Kyoto, Japan). Transmittance (%T) was recorded in the spectral region of 500-4500 cm⁻¹ using a resolution of 4 cm⁻¹ and 40 scans. DSC analysis was used to characterize the drug by examining endothermal transitions in the thermogram obtained.
It involves applying the same heating program to a sample and a reference. DSC (TA Instruments, New Castle, DE) analyses were carried out under a nitrogen flow of 50 mL/min at a heating rate of 10 °C/min from 20 to 300 °C. Tablet batches Visual inspection. Visual inspection of the tablets was performed to check their surface texture. Tablets were observed for various defects, for example capping (partial and complete separation of the top or bottom crown of tablets), lamination (laminar separation of tablet layers), mottling (unequal distribution of color in tablets), picking and sticking. Weight variation. Twenty tablets were taken from each batch. Each tablet was weighed individually and the average weight of the tablet batch was calculated. Then the percentage deviation from the average weight was calculated for each tablet as per IP 2010. Diameter and thickness. Diameter and thickness were measured for all batches with a vernier calliper. This helps in determining the uniformity of the tablet batches. Hardness and friability. The hardness of each tablet batch was tested with a Monsanto hardness tester. The hardness of the tablets should be 3-8 kg/cm². The friability of each tablet batch was tested using a Roche friabilator. Tablets weighing 6-6.5 g in total were placed in the friabilator and rotated for 100 revolutions at 25 rpm. After the test, the tablets were dusted to remove any adhering powder and reweighed. Percentage friability was calculated as: friability (%) = [(initial weight − final weight)/initial weight] × 100. Percentage friability for new and old batches should be below 0.8 and 1%, respectively 29 . Drug content. Ten tablets were powdered in a mortar and pestle. Powder equivalent to 10 mg of drug was taken and dissolved in 100 mL water in a volumetric flask. To 1 mL of the stock solution of drug, 0.4 mL of 0.00117 M FDNB (1-fluoro-2,4-dinitrobenzene) and 1 mL of borate buffer were added. The test tubes were heated in a water bath for 45 min at 60 °C. After cooling, 0.15 mL of 1 N HCl was added and the volume was made up to 10 mL with acetonitrile. Absorbance was measured in a UV-visible spectrophotometer against a suitable blank. All determinations were performed in triplicate. FDNB was used for the UV-visible spectrophotometric analysis of pregabalin because the drug does not contain a chromophore group. FDNB is a common reagent used for the analysis of amino acids or compounds containing a free amine group 30 . Swelling index and floating time. Tablets were placed in a beaker containing 150 mL of 0.1 N HCl. Tablets were weighed before being placed in the beaker (taken as the initial weight). The swollen tablets were taken out, blotted and weighed at 1, 2, 4, 6, 8, 10 and 24 h. Swelling index was calculated using the formula: swelling index (%) = [(swollen tablet weight − initial tablet weight)/initial tablet weight] × 100. Floating lag time was recorded as the time taken by the tablet to rise and float in 0.1 N HCl, and the total duration of floating was also noted 31 . Dissolution studies. The dissolution studies were performed with a USP dissolution apparatus type 2 (at 50 rpm) using 0.06 N HCl as the dissolution medium. Samples were taken at different time intervals and analyzed using a UV spectrophotometer (λmax = 355 nm) after derivatization with Sanger's reagent. Derivatization was performed by mixing 1 mL of the dissolution sample with 0.4 mL of Sanger's reagent and 1 mL of borate buffer. The mixture was then heated for 45 min at 60 °C. After cooling, 0.15 mL of 1 N HCl was added and the volume was made up to 10 mL with acetonitrile. Dissolution studies were performed in triplicate and the average drug release was calculated. Stability studies.
Stability studies were conducted to determine the effect of time and storage conditions on the physical and chemical stability of the product. A stability program is designed to determine the shelf-life or expiry date of a product under normal storage conditions in its intended package. Stability testing of the optimized tablet batch was performed by storing the tablets at 40 °C and 65% relative humidity in a stability chamber for one month. Samples of the tablet batch were taken at different time points (0, 10, 20 and 30 days) and analyzed by dissolution studies, drug content estimation, and FTIR and DSC studies. In vivo pharmacodynamic studies. Pentylenetetrazol (PTZ), a non-competitive GABA antagonist, is used in seizure assays as a method of assessing the excitability of the central nervous system and GABA activity. This model is highly sensitive, and hence widely preferred for comparing different chemicals under standardized conditions 32 . The PTZ test was performed to determine the pharmacodynamic activity of the test formulation. The onset of jerks, onset of seizures, duration of the extensor phase and death or recovery were noted in rats. All animal experiments were carried out after approval of the protocol by the Institutional Animal Ethical Care Committee (IAEC), Panjab University, Chandigarh, India, and conducted according to the Indian National Science Academy (INSA) guidelines for the use and care of experimental animals. SD rats (150-200 g) were fasted overnight for the PTZ test. Each animal received an oral dose (2.8 mg/kg) of the test formulation (50 mg mini-tablets), followed by intraperitoneal administration of PTZ (65 mg/kg) after 1 h. Animals were observed carefully to check the onset of jerks, onset of clonus, duration of the extensor phase and death or recovery. The same procedure was followed with PTZ administered 4, 8, 10 and 12 h after oral treatment with the test formulation. Statistical analysis. Simple analysis of variance (one-way ANOVA, GraphPad Prism 5, GraphPad Software Inc., La Jolla, CA) was used to determine statistically significant differences between the results, and values with p < 0.05 were considered statistically significant, as analyzed by the Dunnett multiple comparison test. Flow properties Angle of repose for the tablet batches ranged between 29.391 ± 0.663° and 30.112 ± 0.742°, as shown in Table 2, which indicated excellent flow of the powder blends. Flowability of a powder is of immense importance in the production of pharmaceutical dosage forms like tablets. It ensures uniform and reproducible filling of the tablet dies, which improves weight uniformity and allows the tablets to have more consistent physico-chemical properties. Angle of repose is the constant three-dimensional angle, relative to the horizontal base, of the cone-like pile formed. An angle of repose between 25° and 30° indicates excellent flow properties 33 . Tapped and bulk densities The tapped and bulk densities of the powder blends ranged from 0.829 ± 0.029 to 0.988 ± 0.052 g/cm³ and 0.621 ± 0.031 to 0.753 ± 0.018 g/cm³, respectively (Table 2). The Hausner ratio for the different powder blends ranged between 1.306 ± 0.019 and 1.452 ± 0.147. The Carr index for the powder blends of the tablet batches was found to be between 0.218 ± 0.038 and 0.241 ± 0.069, as shown in Table 2. According to USP 2011, the value of the Carr index must be less than 10 for excellent flow properties of powders 33 . Bulk density reflects the packing properties of a powder, while the Hausner ratio and Carr index indicate its flow behavior.
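As an illustration of how these flow descriptors are obtained from the raw measurements, a minimal Python sketch is given below. It assumes the standard definitions of the angle of repose (tan⁻¹ of pile height over base radius), bulk and tapped density, Hausner ratio (tapped/bulk) and Carr index ((tapped − bulk)/tapped, kept as a fraction to match the values reported in Table 2); the numbers in the example are placeholders, not the study's raw data.

```python
import math

def angle_of_repose(pile_height_cm: float, pile_radius_cm: float) -> float:
    """Angle of repose in degrees from the height and base radius of the powder cone."""
    return math.degrees(math.atan(pile_height_cm / pile_radius_cm))

def bulk_density(powder_mass_g: float, poured_volume_ml: float) -> float:
    """Bulk density (g/cm^3) from the mass filling a known poured volume."""
    return powder_mass_g / poured_volume_ml

def tapped_density(powder_mass_g: float, tapped_volume_ml: float) -> float:
    """Tapped density (g/cm^3) after the volume stops changing on tapping."""
    return powder_mass_g / tapped_volume_ml

def hausner_ratio(tapped: float, bulk: float) -> float:
    return tapped / bulk

def carr_index(tapped: float, bulk: float) -> float:
    """Compressibility index as a fraction; multiply by 100 for the percentage form."""
    return (tapped - bulk) / tapped

if __name__ == "__main__":
    # Placeholder measurements for one powder blend (not data from the study).
    bulk = bulk_density(powder_mass_g=7.5, poured_volume_ml=10.0)     # 0.75 g/cm^3
    tapped = tapped_density(powder_mass_g=7.5, tapped_volume_ml=7.6)  # ~0.99 g/cm^3
    print(f"Angle of repose: {angle_of_repose(2.3, 4.0):.1f} deg")
    print(f"Hausner ratio:   {hausner_ratio(tapped, bulk):.3f}")
    print(f"Carr index:      {carr_index(tapped, bulk):.3f}")
```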
Physiochemical interaction No significant interactions were found between pregabalin and the excipients (gums and polymers), as confirmed by the FTIR spectra and DSC thermograms. The FTIR spectrum of the physical mixture (drug + excipients) retained all the important peaks of the drug, such as N-H stretching (2956.92 cm⁻¹), O-H stretching (2602.77 cm⁻¹) and C=O stretching (1645.23 cm⁻¹; Supplementary Figure S1). Thermograms of the physical mixture (pregabalin + excipients) retained the sharp endotherm of the drug, which showed the absence of any significant interaction between the drug and the excipients (Supplementary Figure S2). The physical evaluation results of the tablet batches are summarized in Table 3. Swelling index Swelling index is an important criterion in controlled release systems. Due to the swelling of the various polymers present in the formulation, drug release is prolonged. Drug diffusion is controlled by the gel diffusion barrier 34 . The swelling index of the various tablet batches (which passed the physical evaluation tests) was determined by weighing the tablets at different time intervals after placing them in 0.1 N HCl at 37 °C. Batches B2, B5 and B6 dissolved within 3 h in the medium, while the other batches exhibited good swelling behavior. The experimental data showed that the swelling index of the tablet batches with Carbopol 974P NF and HPMC K100 (B8 and B10) was higher compared to the tablet batches with gums and HPMC K100 (batches B7 and B9; Figure 1). Due to the swelling of the gums, Carbopol 974P NF and HPMC K100, the tablets showed drug release over extended periods. The gums and Carbopol 974P NF present in the matrix tablets retarded the drug release and prevented a burst effect. However, decreasing the gum, Carbopol 974P NF and HPMC K100 content of the tablet batches increased the drug release into the medium. By decreasing the ratio of Carbopol 974P NF and HPMC K100 content (B8 and B10), the swelling index was found to decrease in comparison to B7 and B9. The swelling index of batch B9 was found to be slightly lower than that of batch B7 because of the lower percentage of guar gum in B9 (21.4%) as compared to B7 (26.7%). The floating lag time of the tablet batches ranged between 2 and 35 min, while the duration of floating for the different batches ranged between 15 min and 24 h. Figure 2 depicts the swelling behavior and floating property of the pregabalin tablets at various time points. In vitro drug release studies In vitro drug release studies of the tablet batches containing gums and HPMC were performed in water using USP dissolution apparatus 2, i.e. the paddle-type apparatus, at 50 rpm and 37 °C. The release kinetics of these batches exhibited linearity with the Korsmeyer-Peppas model (r² > 0.9), suggesting that drug release from the tablets occurred by diffusion through a matrix system. It was observed that the tablet batches prepared with natural gums (guar gum and xanthan gum), i.e. batches B7 and B9, showed higher cumulative drug release as compared to the tablet batches prepared with synthetic polymers like Carbopol 974P NF and HPMC (B8 and B10). The reason might be the high swelling capacity of Carbopol 974P NF as compared to the gums, which resulted in a lower drug release rate. Moreover, natural gums like guar gum and xanthan gum also act as disintegrating agents, which resulted in higher drug release as compared to the batches prepared with Carbopol 974P NF (B8 and B10). The above observations suggested B9 as the most optimized batch, which was further selected for stability studies and animal studies.
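The release-kinetics statement above (linearity with the Korsmeyer-Peppas model, r² > 0.9) can be reproduced with a simple log-log regression. The sketch below is a minimal illustration with made-up cumulative-release values, not the study's dissolution data, and fits only the portion of the curve (fraction released ≤ 0.6) to which the model is usually applied.

```python
import numpy as np

def fit_korsmeyer_peppas(time_h, fraction_released):
    """Fit log(Mt/Minf) = log(k) + n*log(t) on the portion of the curve
    usually used for this model (fraction released <= 0.6)."""
    t = np.asarray(time_h, dtype=float)
    f = np.asarray(fraction_released, dtype=float)
    mask = (f > 0) & (f <= 0.6) & (t > 0)
    x, y = np.log(t[mask]), np.log(f[mask])
    n, log_k = np.polyfit(x, y, 1)          # slope = release exponent n, intercept = log(k)
    y_hat = n * x + log_k
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return n, np.exp(log_k), r_squared

if __name__ == "__main__":
    # Hypothetical dissolution profile (time in hours, cumulative fraction released).
    time_h = [1, 2, 4, 6, 8, 10, 12]
    released = [0.12, 0.19, 0.30, 0.39, 0.47, 0.54, 0.60]
    n, k, r2 = fit_korsmeyer_peppas(time_h, released)
    print(f"n = {n:.2f}, k = {k:.3f}, r^2 = {r2:.3f}")
```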
Stability studies Stability testing of tablet batch B9 was performed by storing the tablets at 40 °C and 65% relative humidity in a stability chamber. Dissolution studies Dissolution was performed to observe any increase or decrease in drug release from the tablet batch after different time intervals (0, 10, 20 and 30 days). No statistically significant difference (p > 0.05) was observed in the drug release profile of batch B9 over the one-month period (Figure 4). Drug content The drug content of tablet batch B9 kept for stability studies was found to be 98.25 ± 0.81% after 30 days of storage, indicating that the drug was stable in the developed batch B9. Physiochemical interaction FTIR spectra showed no change in the functional group peaks of the drug, which indicated the stability of batch B9. Similarly, the DSC thermogram of batch B9 showed insignificant changes in the endothermic peaks (Supplementary Figures S3 and S4). In vivo pharmacodynamic studies All animals except the control group were treated with pregabalin tablets (batch B9), and the PTZ injection was given after 1 h (Group 1), 4 h (Group 2), 8 h (Group 3), 10 h (Group 4) or 12 h (Group 5), and observations were noted. The control group showed a faster onset of jerks and clonus as compared to the treated groups, i.e. 76 ± 7.94 and 108.33 ± 10.41 s, respectively. The duration of the extensor phase was found to be 237.3 ± 56.05 s. The severity of jerks and clonus was very high in the control group. All the animals of the control group died within an hour of the PTZ injection (100% mortality). Group 1, treated with formulation B9, showed a delayed onset of jerks (160.33 ± 17.62 s) and clonus (211 ± 7.33 s) after 1 h of oral dosing. The duration of the extensor phase was found to be 43 ± 6.32 s after 1 h of oral dosing with formulation B9. The onset of jerks and clonus after 4 h (Group 2) was found to be 85 ± 5.30 s and 273 ± 1.68 s, respectively, with no extensor phase. The onset of jerks and clonus after 10 h (Group 4) was found to be 234.33 ± 9.90 and 360 ± 55.8 s, respectively, with no extensor phase. The severity of jerks and clonus decreased after 4 h (Group 2). After 12 h of oral dosing, no jerks, clonus or extensor phase were observed in the treated animals, and hence there was no mortality. Therefore, the data showed that tablet batch B9 gave a prolonged effect of up to 12 h in the treatment of seizures. A statistically significant difference (p < 0.05) was observed between the animals treated with tablet batch B9 and the control group after 1, 4, 8, 10 and 12 h (Figure 5). It was clear from the observations that all the phases of epilepsy diminished after oral dosing with the pregabalin tablets (batch B9), as the formulation exhibited extended pharmacodynamic efficacy against PTZ-induced seizures. Thus, formulation batch B9, containing guar gum (21.4%), xanthan gum (7.1%) and HPMC K100 (8.9%), was found to be stable and highly effective in the treatment of partial seizures. Conclusion Floating tablets of pregabalin were successfully prepared (using different concentrations of gums and polymers) and evaluated. Tablet batch B9 (primarily consisting of natural gums) exhibited excellent properties such as a long floating time (>24 h, with a lag time of about 7 min), an extended drug release profile and good stability. Furthermore, it displayed excellent efficacy against partial seizures for prolonged periods. Besides excellent biocompatibility, natural gums also serve as binders, disintegrants, emulsifiers and stabilizers.
Hence, exploitation of natural gums for fabrication of floating tablets could be highly beneficial in controlling the release of drugs from the designed matrix systems.
A Depth-Adaptive Waveform Decomposition Method for Airborne LiDAR Bathymetry Airborne LiDAR bathymetry (ALB) has shown great potential in shallow water and coastal mapping. However, due to the variability of the waveforms, it is hard to detect the signals from the received waveforms with a single algorithm. This study proposed a depth-adaptive waveform decomposition method to fit the waveforms of different depths with different models. In the proposed method, waveforms are divided into two categories based on the water depth, labeled as "shallow water (SW)" and "deep water (DW)". An empirical waveform model (EW) based on the calibration waveform is constructed for SW waveform decomposition, which is more suitable than classical models, and an exponential function with second-order polynomial model (EFSP) is proposed for DW waveform decomposition, which performs better than the quadrilateral model. In solving the model's parameters, a trust region algorithm is introduced to improve the probability of convergence. The proposed method is tested on two field datasets and two simulated datasets to assess the accuracy of the water surface detected in the shallow water and the water bottom detected in the deep water. The experimental results show that, compared with the traditional methods, the proposed method performs best, with a high signal detection rate (99.11% in shallow water and 74.64% in deep water), low RMSE (0.09 m for the water surface and 0.11 m for the water bottom) and a wide bathymetric range (0.22 m to 40.49 m). Introduction Airborne LiDAR bathymetry (ALB) is a technique for measuring the depths of moderately clear, near-shore coastal waters and lakes with a high-powered, pulsed green laser from a low-altitude aircraft (200-500 m above ground level (AGL)) [1][2][3]. This technique can provide high-density, high-accuracy, three-dimensional bathymetric data safely and with high efficiency compared with shipborne sonar. The swath width of an ALB system roughly ranges from 0.5-0.75 times the AGL, namely 100-400 m, indicating that the area covered within 1 h ranges from about 20-60 km² [4]. Since ALB is cost-effective and time-saving, it has been widely used in coastal mapping, sediment budgets and seabed protection [5,6]. Since the green laser propagates through both air and water, waveform processing for an ALB system can sometimes be challenging. Water absorption and scattering influence the extraction of the water surface and bottom returns from the waveform of the green channel, especially for turbid water. Two problems in particular stand out: 1. Mixed peaks in the surface return: The waveforms are the convolution of the emitted pulse and the target cross section, and are digitized by the receiver. The limited full width at half maximum (FWHM) of the emitted pulse and the sampling rate of the LiDAR digitizer induce peak stretching, leading to a mixed peak of the surface return and water column scattering. Especially when the water is extremely shallow, the bottom return will also be included in the mixed peak. Taking the mixed peak as the surface return may introduce errors ranging from 10 cm to 25 cm [9]. 2. Weak bottom return in deep or turbid water: The pulse energy decreases exponentially with depth in the water column, and the decrease rate is positively associated with water turbidity, resulting in a rather weak bottom return in deep or turbid water [10]. For the first problem, some ALB systems use an NIR channel or a Raman channel to measure the surface return.
Pe'eri and Philpot [11] studied the relationship between the shape of the Raman waveform and the water depth and used the Raman channel to measure the water depths (shallower than 2 m). However, this algorithm may vary with water clarity. Allouis et al. [12] found that the Raman signal is not reliable for depth estimation, since the Raman signal is sensitive to water characteristics and proposed a depth estimation method using NIR and green fitted waveforms. Another alternative is to establish a water surface model to determine the position deviation between the mixed peak and the water surface, so that the single green laser ALB systems can be used to estimate the water depth. Mandlburger et al. [9] analyzed the near water surface penetration (NWSP) properties of green laser signals. Zhao et al. [13] studied the factors that influence the NWSP of a green laser and proposed an NWSP modeling method. This method needs an auxiliary NIR laser and some SSC sampling stations, which sometimes are difficult to obtain, and so waveform decomposition may be the optimal solution to this problem. For topographic LiDAR, the Gaussian model is sufficient for most applications [14]. However, the Gaussian decomposition method is not suitable for ALB because the water column component in the waveform cannot be easily fitted by Gaussian functions [15,16]. The triangular function [17] and quadrilateral function [18] were both introduced to water column fitting but were only verified by simulated data. Ding et al. [19] proposed an improved quadrilateral model for the water column fitting which shows a better fit to the field data compared with the quadrilateral function. For very shallow water, a surface-volume-bottom (SVB) algorithm was proposed by Schwarz et al. [20] and was applied to measure a riverbed. However, waveforms at different depths vary greatly, and a single model is only suitable for waveforms in a specific depth range. For the second problem, the key is detecting the bottom signal from low-SNR waveforms. Numerous waveform processing methods have been developed for waveform de-noising or signal enhancement. Saylam et al. [21] used a moving-average filtering algorithm for waveform smoothing. Pan et al. [7] introduced a continuous wavelet transformation (CWT) to project the signal into a continuous time and scale subspace and found that more signals can be detected from the reconstructed waveform while many signals are still undetectable. Wang et al. [15] examined some waveform processing methods and concluded that the Richardson-Lucy deconvolution (RLD) [22] is good at dealing with waveforms with a very shallow depth and a weak bottom response, and the average square difference function (ASDF) [23] can better cope with noise. Richter et al. [24] proposed an attenuation correction procedure to improve the detectability of water bottom signals. Launeau et al. [25] proposed a waveform processing method including smoothing and edge enhancing to increase the detection rate of the bottom signal in moderately turbid water. The de-noising and signal-enhancing methods can improve the detection rate, but the detection accuracy is still limited by the waveform sampling interval, which may also require waveform decomposition. Although the previous methods have partially solved the above problems, there is no single best waveform processing strategy across all applications [7,26]. 
The Mapper5000 System and Study Area The field data were collected by an airborne topo-bathymetric LiDAR (Mapper5000) system in the Qilianyu Islands, Hainan Province, China. This ALB system was developed by the Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences. The system performed an elliptical scan at a scan angle of ±15°, giving a field of view (FOV) of 30°, over an area of 16.8 km × 2.2 km; the detailed parameters are shown in Table 1. The sensor consists of four channels, including one NIR channel (1064 nm, avalanche photodiode (APD)) and three green channels (532 nm, photomultiplier tube (PMT)). These three green channels are configured differently: PMT1 and PMT2 have a narrow field angle, while PMT3 has a wide field angle. The instantaneous field of view (IFOV) of NIR, PMT1 and PMT2 is 6 mrad, and the IFOV of PMT3 is 6-40 mrad. The receiving directions of PMT1 and PMT2 are perpendicular to each other. Because the signals in PMT1 waveforms are generally stable, this study focuses on the processing of PMT1 waveforms, and the waveforms in the other channels are used as references. Besides this, the calibration waveforms collected in the laboratory are utilized as an approximation of the emitted signal. With the detected bathymetric signals, point clouds can be generated by data processing software designed for the Mapper5000 system, which includes geo-calibration and refraction correction of the point clouds. Detailed descriptions of the Mapper5000 system and its point cloud generation software are given in Appendices A and B, respectively. For the statistical analysis of waveforms, we chose four bathymetric points (P1-P4) with different depths. In order to assess the performance of the proposed waveform decomposition method, especially in areas where the water depth is deep or shallow, two representative datasets, Dataset 1 and Dataset 2, were selected from two different strips. Figure 1 shows the locations of the strips, the datasets and the bathymetric points in the study area. Workflow The original PMT1 waveforms are processed with the depth-adaptive waveform decomposition method to extract the accurate positions of the bathymetric signals. As shown in Figure 2, this is a multistep process including preprocessing, signal detection and waveform decomposition. The first step was to determine the useful range of the waveforms, classify the waveforms by depth and improve the signal resolution by RLD. After deconvolution, the second step was to detect the signals with an adaptive threshold, and the results were used as the initial values of the following waveform decomposition. PMT1 waveforms were classified into two categories, "shallow water (SW)" and "deep water (DW)". In the last step, the fitting model was selected based on the waveform category. With the initial values provided by the second step, the model parameters were solved by the TR algorithm. After waveform decomposition, the surface and bottom signals were extracted from the waveforms. To evaluate the accuracy of the results, signals detected from NIR waveforms and PMT3 waveforms were used as reference data.
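To make the multistep workflow concrete, the following Python skeleton mirrors the three stages described above (preprocessing, signal detection, waveform decomposition). The function names and the placeholder bodies are illustrative assumptions only and do not represent the Mapper5000 processing software; the individual stages are sketched in more detail in the sections that follow.

```python
import numpy as np

def preprocess(waveform, calibration):
    """Stage 1: truncate to the useful range, classify by depth (SW/DW) and
    sharpen the waveform with Richardson-Lucy deconvolution (sketched later)."""
    useful = waveform   # placeholder: useful-range extraction goes here
    category = "SW"     # placeholder: classification by the parameter S goes here
    deconvolved = useful
    return useful, category, deconvolved

def detect_signals(deconvolved):
    """Stage 2: pick candidate surface/bottom peaks with an adaptive threshold."""
    surface_idx = int(np.argmax(deconvolved))
    bottom_idx = None   # placeholder: second-peak search goes here
    return surface_idx, bottom_idx

def decompose(waveform, category, init):
    """Stage 3: fit the EW model (SW) or EFSP model (DW) with a trust-region solver."""
    return {"surface": init[0], "bottom": init[1],
            "model": "EW" if category == "SW" else "EFSP"}

def process_pmt1_waveform(waveform, calibration):
    useful, category, deconvolved = preprocess(waveform, calibration)
    init = detect_signals(deconvolved)
    return decompose(useful, category, init)

if __name__ == "__main__":
    fake_waveform = np.exp(-0.5 * ((np.arange(400) - 120) / 4.0) ** 2)
    print(process_pmt1_waveform(fake_waveform, calibration=None))
```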
Useful Range To ensure that the ALB system can measure both land and water, the system normally records thousands of samples for the received waveform, while the signals only exist in 0.8-5% of it. Thus, if the range of the signals, which is known as the "useful range", can be determined, the waveform processing will be more efficient. Jutzi and Stilla [27] proposed a criterion that if the waveform is three times higher than the noise power for at least 5 ns, a signal is assumed to have been found. In this paper, we choose the last 10% of the waveform to estimate the noise. According to the maximum measurable depth and flight height of the system, signals cannot exist in this range. The truncation noise N T and the noise power N P can be estimated using the minimum amplitude and the standard deviation of the waveform in this range, respectively. The noise level N L can then be expressed as N L = N T + 3N P. By searching the waveform using the criterion that a section higher than N L and lasting for no less than 5 ns may indicate a signal, the useful range can be determined, as shown in Figure 3.
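A minimal sketch of the useful-range step is shown below. It assumes that the noise statistics are taken from the last 10% of the record, that the noise level has the form N T + 3N P (consistent with the fixed threshold used later in the comparisons; the constant in the paper's own equation may differ), and that a signal is indicated when the waveform stays above this level for at least 5 ns.

```python
import numpy as np

def noise_level(waveform):
    """Estimate the truncation noise (minimum amplitude) and noise power (standard
    deviation) from the last 10% of the record, where no signal can exist."""
    tail = waveform[int(0.9 * len(waveform)):]
    n_t = float(np.min(tail))      # truncation noise N_T
    n_p = float(np.std(tail))      # noise power N_P
    return n_t + 3.0 * n_p         # assumed form of the noise level N_L

def useful_range(waveform, sampling_interval_ns=1.0, min_duration_ns=5.0):
    """Return (start, end) sample indices spanning the sections where the waveform
    stays above the noise level for at least `min_duration_ns`."""
    n_l = noise_level(waveform)
    above = waveform > n_l
    min_len = int(np.ceil(min_duration_ns / sampling_interval_ns))
    start = end = None
    run = 0
    for i, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run >= min_len:
            if start is None:
                start = i - run + 1
            end = i
    return start, end

if __name__ == "__main__":
    t = np.arange(2000, dtype=float)
    baseline = 0.2 + 0.1 * np.sin(0.7 * t)                       # deterministic toy noise
    wave = baseline + 40 * np.exp(-0.5 * ((t - 300) / 5) ** 2) \
                    + 8 * np.exp(-0.5 * ((t - 420) / 8) ** 2)     # surface + bottom returns
    print(useful_range(wave))
```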
Waveform Classification The received waveforms of the green LiDAR vary with depth, as shown in Figure 4. To fit waveforms of different depths, the waveforms are grouped into two categories, "shallow water" and "deep water", according to a defined parameter S. Because the depths at which the surface and bottom signals overlap vary with the ALB system and the measured area, it is difficult to classify waveforms with an absolute water depth threshold. Based on the waveform analysis, we found that the shape of the water column scattering in the received waveform is almost fixed in a small measured area (see Section 3.1.1). If the water column scattering can be captured in the waveform, this also indicates that there is no overlap between the water surface and bottom signals. Thus, the water column scattering can be used as a sign to classify the waveforms. In this study, shallow water and deep water are not distinguished by absolute water depth but by waveform shape. If the surface and bottom signals overlap in the waveform, it is defined as "SW"; otherwise, the waveform is labeled as "DW". The parameter S is determined by Equations (2) and (3), where w R is the received waveform, w C is a section of the water column scattering truncated from the received waveforms, N is the sampling number of w C , and τ is the sampling interval of the system. Because the received waveforms in extremely deep water have complete water column scattering (see Figure 4c), w C can easily be obtained from them. Specific details are given in Section 3.1.1. For an "SW" waveform, the water column scattering is embedded in the surface and bottom signals, and so the received waveform is less similar to w C , as shown in Figure 5a. Because the water column reflectance varies little in a small measured area, a "DW" waveform will have a small S, as shown in Figure 5b. Thus, received waveforms in which S is less than a threshold (T S ) are classified as "DW", while the remaining waveforms are "SW".
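Because Equations (2) and (3) are not reproduced above, the sketch below only illustrates the idea with an assumed average-squared-difference measure: the water-column template w C is slid along the received waveform and the smallest mean squared difference is taken as S, so a waveform containing an undisturbed water-column segment (DW) yields a small value, while overlapping surface and bottom returns (SW) yield a large one. The toy threshold used here is arbitrary and does not correspond to the value T S = 4000 determined later, which depends on the paper's own units.

```python
import numpy as np

def classification_parameter_s(received, column_template):
    """Assumed ASDF-like dissimilarity: slide the water-column template w_C along
    the received waveform w_R and return the minimum mean squared difference."""
    w_r = np.asarray(received, dtype=float)
    w_c = np.asarray(column_template, dtype=float)
    n = w_c.size
    scores = [np.mean((w_r[i:i + n] - w_c) ** 2) for i in range(w_r.size - n + 1)]
    return float(np.min(scores)), int(np.argmin(scores))

def classify_waveform(received, column_template, threshold_s):
    s_value, _ = classification_parameter_s(received, column_template)
    return "DW" if s_value < threshold_s else "SW"

if __name__ == "__main__":
    t = np.arange(200, dtype=float)
    template = 5.0 * np.exp(-0.05 * np.arange(20))            # toy decaying column segment
    deep = np.concatenate([np.zeros(60), 5.0 * np.exp(-0.05 * np.arange(140))])
    shallow = 30 * np.exp(-0.5 * ((t - 60) / 3) ** 2) + 20 * np.exp(-0.5 * ((t - 70) / 3) ** 2)
    toy_threshold = 1.0                                        # illustrative only
    print(classify_waveform(deep, template, toy_threshold))      # expected: DW
    print(classify_waveform(shallow, template, toy_threshold))   # expected: SW
```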
Richardson-Lucy Deconvolution Waveforms are processed by RLD to increase the signal resolution. Most of the waveforms are likely to have mixed peaks, especially the first peak, which may contain the surface and water column returns; this overlap mostly results from the convolution of the emitted waveform and the target cross section. Thus, a deconvolution process is needed before detecting the signals. RLD is an iterative deconvolution method which was developed by Richardson [28] and Lucy [29] to recover a blurred image with a known point spread function (PSF). Many studies [22,30,31] have applied it to LiDAR waveform processing in the time domain to estimate the target cross section, where the emitted waveform is regarded as the PSF. The (i+1)th iterate can be calculated as p(t) i+1 = p(t) i · {[w R (t)/(w T (t) * p(t) i )] * w T (−t)}, where "*" is the convolution product, w T (t) is the emitted waveform, which is approximated by the calibration waveform in this paper, w R (t) is the received waveform, and p(t) i+1 is the estimate of the target cross section after the ith iteration. The initial value of p(t) for the iteration is simply set as w R (t), which has a negligible effect on the result. Determining the stopping point of the iteration is the key to RLD, because running too many iterations amplifies noise through overfitting. The stopping criterion of RLD used here is discussed in detail in [32]. Figure 6 shows that RLD can improve the signal resolution in both shallow water and deep water waveforms. In shallow water waveforms, RLD can shorten the FWHM of the surface and bottom signals (see Figure 6a). In deep water waveforms, RLD can partially remove the background noise while keeping the weak bottom signal (see Figure 6b).
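The sketch below implements the standard time-domain Richardson-Lucy update described above, with the calibration (emitted) waveform acting as the PSF and a fixed iteration count standing in for the stopping criterion of [32]; the implementation details of the paper's own version may differ.

```python
import numpy as np

def richardson_lucy(received, emitted, n_iter=30, eps=1e-12):
    """Time-domain RLD: iteratively estimate the target cross section p(t) such
    that emitted * p approximates the received waveform ('*' is convolution)."""
    w_r = np.clip(np.asarray(received, dtype=float), 0.0, None)
    psf = np.asarray(emitted, dtype=float)
    psf = psf / (psf.sum() + eps)                 # normalized PSF
    psf_flipped = psf[::-1]
    p = w_r.copy()                                # initial estimate: the received waveform
    for _ in range(n_iter):                       # fixed count in place of a formal stopping rule
        blurred = np.convolve(p, psf, mode="same")
        ratio = w_r / (blurred + eps)
        p *= np.convolve(ratio, psf_flipped, mode="same")
    return p

if __name__ == "__main__":
    t = np.arange(300, dtype=float)
    emitted = np.exp(-0.5 * (np.arange(-20, 21) / 4.0) ** 2)
    truth = np.zeros_like(t); truth[100] = 1.0; truth[115] = 0.4   # surface + bottom targets
    received = np.convolve(truth, emitted, mode="same")
    sharpened = richardson_lucy(received, emitted, n_iter=50)
    # Report the recovered surface peak and the bottom peak in the later window.
    print(int(np.argmax(sharpened)), 108 + int(np.argmax(sharpened[108:130])))
```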
Signal Detection A number of signal detection methods, such as threshold, center of gravity and maximum, are often used in conventional LiDAR waveform processing [33]. However, most of them may not work for w R (t) in ALB. We tested some of the conventional signal detection methods, as shown in Figure 7. It could be determined that the center of gravity method may not be suitable for ALB signal detection, due to the component of water column scattering in the received waveform. The threshold and maximum methods can give more reliable results with an appropriate threshold. However, the appropriate threshold may vary between waveforms. Figure 7. Comparison of the conventional signal detection methods, where "threshold1" used a threshold equal to one-tenth of the maximum amplitude of w R (t) (M A ), "threshold2" used a threshold equal to 0.05 M A , "maximum1" used a threshold equal to 0.2 M A , and "maximum2" used a threshold equal to 0.1 M A . Figure 8. Thresholds used in signal detection. The high-intensity fake signals are located in the water column scattering, and may be detected by the maximum method with a fixed threshold. In this section, an adaptive threshold was therefore used for ALB signal detection instead of a fixed one, as shown in Figure 8. The adaptive threshold T(t) is defined with respect to t S , where t S is the minimum point corresponding to the minimum value S of R(t). As the water depth changes, T(t) is adjusted accordingly, effectively avoiding the influence of high-intensity fake signals on signal detection and improving the reliability of the results. Candidate signals in the deconvolved waveforms are detected with the maximum method and filtered by T(t). If there are more than two remaining signals, the two signals with the largest amplitude are retained. Waveform Decomposition There are three basic steps in waveform decomposition: modeling, initialization and fitting. The waveform processed here is denoted by w(t), which is the original received waveform truncated at the noise level N L . Modeling The key to waveform decomposition is to build a reasonable model. Some models for the ALB waveform have been introduced, such as two Gaussian functions [12], a combination of a Gaussian function, a triangle function and a Weibull function [17], a combination of two Gaussian functions and a quadrilateral function [18], and a chain of exponential segments [34]. Considering that water depth is an important factor influencing the shape of the waveform (as shown in Figure 4), two models are proposed for the "SW" and "DW" waveforms, respectively. The fitting model f W (t) is expressed as the sum of a surface return model f S (t), a water column scattering model (f C1 (t) for "SW" and f C (t) for "DW" waveforms) and a bottom return model f B (t). For the surface return model f S (t) and the bottom return model f B (t), the existing models use a specific function, such as a Gaussian, exponential or harmonic function, to model the surface and bottom returns, but these are not exact in practice because of the distortion caused by the sensor response, as shown in Figure 9. Thus, we do not use a model built from such predefined functions, but directly use a transformation C of the calibration waveform to better fit the surface and bottom returns: C(t; A, µ, σ) = A·ϕ((t − µ)/σ), where A is the amplitude scaling factor, µ is the time shift factor, σ is the time scaling factor, and ϕ is the normalized calibration waveform. The calibration waveform used here can be collected in the laboratory or substituted by a waveform received from bare ground; ϕ is estimated using the smoothing spline method, normalized in amplitude, and shifted in time to locate the peak at the zero point. The surface return model f S (t) is given by f S (t) = C(t; A S , µ S , σ S ), where A S , µ S and σ S are the amplitude scaling factor, time shift factor and time scaling factor in function C, respectively.
The bottom return model f B (t) is given by f B (t) = C(t; A B , µ B , σ B ), where A B , µ B and σ B are the amplitude scaling factor, time shift factor and time scaling factor in function C, respectively. For "SW" waveforms, the water column scattering model f C1 (t) is taken as f C1 (t) = C(t; A C , µ C , σ C ), where A C , µ C and σ C are the amplitude scaling factor, time shift factor and time scaling factor in function C, respectively. For "SW" waveforms, the proposed model f W (t) is therefore based on the calibration waveform and is named the empirical waveform model (EW). For "DW" waveforms, f C (t) is defined as an exponential function combined with a second-order polynomial, where a, b, c and d are the horizontal coordinates of the four boundary points in f C1 (t), as shown in Figure 10, and f, g and h are coefficients related to the water column scattering. Here, the exponential function with a second-order polynomial is proposed to improve the quadrilateral model presented in [19]. Hence, this model is named the exponential function with second-order polynomial model (EFSP). Thus, the proposed model f W (t) can be denoted by f W (t, γ) with an unknown parameter vector γ (one parameter set for the EW model and another for the EFSP model). Figure 9. Calibration waveform (solid) and fitting curves (dashed) with the Gaussian, exponential and harmonic models. The Gaussian model includes only one Gaussian function. The exponential model is a sum of two exponential functions, and the harmonic model is a combination of four harmonic functions. All fitting parameters were solved by a Levenberg-Marquardt algorithm, and the adjusted R-square (R²) coefficients of the Gaussian, exponential and harmonic models are 0.9332, 0.8015 and 0.9956, respectively. Initialization Initialization is a significant step in waveform decomposition. Waveform decomposition is a nonlinear non-negative least-squares problem, which can be solved by many algorithms, but global convergence cannot be guaranteed, especially when there is a large number of parameters to solve. The first step of initialization is to calculate the initial value γ 0 . The results of signal detection in Section 2.4, namely the rough positions of the surface and bottom signals, denoted by t S0 and t B0 , are used to calculate the initial values of the parameters in γ, where t L is the length of ϕ to the left of the peak, and t R is the length of ϕ to the right of the peak. If only the surface signal is detected, the value of t B0 is set to t S0 + 1/2 × t L . In the EFSP model, the last three parameters in γ are estimated by a simple linear fit over the range [t S0 + t R , t B0 − t L ], because the surface and bottom returns have a negligible effect on this part of the waveform. Fitting Regarded as a non-linear least-squares problem, the fitting step can be handled by many mature theories and algorithms, such as the Gauss-Newton algorithm (GN), the Expectation-Maximization algorithm (EM) [35], the Levenberg-Marquardt algorithm (LM) [36,37] and Reversible Jump Markov Chain Monte Carlo (RJMCMC) [38]. However, GN, the most traditional method, easily converges to a local optimum, and its modification, LM, although globally convergent to some degree, may be influenced by the initial values in practice. Thus, the TR algorithm is introduced in this study. Unlike the other algorithms mentioned above, TR can solve a problem with constraints, which means that the parameters will be kept within a reasonable range.
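As a concrete illustration of bound-constrained fitting, the sketch below fits a simplified two-component empirical waveform model (surface plus bottom, each a shifted and scaled copy of the calibration waveform, with the water column term omitted) using SciPy's trust-region reflective solver. It is a minimal stand-in for the full EW/EFSP models and for the TR formulation detailed in the next paragraphs, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.interpolate import interp1d

def make_ew_model(calibration_t, calibration_amp):
    """Build phi (normalized calibration waveform, peak at t = 0) and a simplified
    EW-style model: two scaled and shifted copies of phi (surface + bottom)."""
    amp = np.asarray(calibration_amp, float) / np.max(calibration_amp)
    t0 = calibration_t[np.argmax(amp)]
    phi = interp1d(calibration_t - t0, amp, bounds_error=False, fill_value=0.0)

    def model(t, params):
        a_s, mu_s, sig_s, a_b, mu_b, sig_b = params
        return a_s * phi((t - mu_s) / sig_s) + a_b * phi((t - mu_b) / sig_b)

    return model

if __name__ == "__main__":
    cal_t = np.arange(-20.0, 21.0)
    cal_amp = np.exp(-0.5 * (cal_t / 3.0) ** 2)                # toy calibration waveform
    model = make_ew_model(cal_t, cal_amp)

    t = np.arange(0.0, 200.0)
    true_params = np.array([30.0, 80.0, 1.0, 8.0, 95.0, 1.2])
    waveform = model(t, true_params) + 0.3 * np.sin(0.9 * t)   # deterministic "noise"

    x0 = np.array([25.0, 78.0, 1.0, 5.0, 97.0, 1.0])           # rough values from signal detection
    lower = [0.0, 70.0, 0.5, 0.0, 85.0, 0.5]                   # bounds keep parameters physical
    upper = [60.0, 90.0, 3.0, 60.0, 110.0, 3.0]
    fit = least_squares(lambda p: model(t, p) - waveform, x0,
                        bounds=(lower, upper), method="trf")   # trust-region reflective
    print(np.round(fit.x, 2))
```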
Furthermore, TR is not a line search algorithm, which searches along a search direction in each iteration; instead, TR searches for the next iterate within a trust region, which is a neighborhood of the current iterate. The principle is described below [39]. The cost function Q(γ) is expressed as Q(γ) = Σ i=1..n [w(t i ) − f W (t i , γ)]² (Equation (16)), where n is the number of sampling points in the useful range, and m is the number of parameters in γ. At the kth iteration, Q(γ) is expanded as a Taylor series at the current iteration point γ (k) with the second-order terms preserved (Equation (17)). We plug d = γ − γ (k) into Equation (17) and obtain the quadratic form in Equation (18). The trust region in the current iteration can be expressed as ‖d‖ ≤ r k , where r k is the trust region radius. As the range of γ is given, we can limit the solution to a reasonable range by setting r k . Thus, minimizing the cost function in Equation (16) is translated into solving the trust region subproblem in Equation (19). By solving Equation (19) using a line searching algorithm, the optimal step d (k) is obtained. The key part of a TR algorithm is how to judge d (k) ; that is, whether to accept the current improvement d (k) and how to change the trust region radius r k . According to the chosen strategy, the TR method has many forms [40]. This paper uses only one of them; that is, the correctness of d (k) is judged according to the ratio ρ k of the actual decrease of the function value to the predicted decrease [41]. If ρ k ≤ 0, then the step d (k) fails, and we let γ (k+1) = γ (k) . Conversely, if the step d (k) succeeds, we let γ (k+1) = γ (k) + d (k) and calculate the new trust region radius r k+1 according to ρ k . The new starting point γ (k+1) and the trust region radius r k+1 are then re-substituted into Equations (18) and (19). We repeat these steps until the result converges. Experiment I: Waveform Classification For waveform registration, each waveform was shifted in time to locate the peak of the surface signal at the zero point. The mean and variance curves of the four bathymetric points were calculated as shown in Figure 11. As depicted in Figure 11a, the bottom signal is most affected by the water depth, and its amplitude shows an obvious decrease with increasing water depth. Some differences can also be found in the surface signals due to the change of the measurement time and position. However, the amplitudes of the water column scattering are almost constant at different depths. The variance curves in Figure 11b show the differences between waveforms at the same water depth. It can be found that the surface and bottom signals still change even at a constant water depth. In contrast, the change in the water column scattering is negligible. When the water depth is 30 m and the time offset is 10 ns, the variance quickly drops to 60, indicating that the waveform after this point is only slightly affected by the surface signal. When the time offset is equal to 30 ns, the amplitude of the mean curves is below 5% of the intensity of the surface signal; this suggests that the shape of the water column scattering in the received waveform is almost fixed in a small measured area, which can be used as a sign to classify the waveforms. Therefore, the mean curve with a time offset between 10 ns and 30 ns can be truncated as w C (see Section 2.3.2). Distribution of S. According to the width of the emitted signal, the received waveforms can be divided into three cases: • Case 1: Water column scattering is completely covered by the surface and bottom signals.
• Case 2: The length of water column scattering that is not covered is less than the length of w C . • Case 3: The length of water column scattering that is not covered is greater than the length of w C . In this experiment, waveforms in the different cases were selected and S was calculated according to Equations (2) and (3). As shown in Figure 12, the S values of the waveforms belonging to Case 1 are above 4500, while the S values in Case 3 are below 3500. Thus, the threshold T S can be set between 3500 and 4500. In this paper, T S is set to 4000. Reference Data The bathymetric environment is complex and changeable. Figure 13 shows the scope of Dataset 1. It can be seen that the edge of Dataset 1 is adjacent to the island's coastline, which is the area most affected by tides. Therefore, in the accuracy analysis, the reference data must be acquired simultaneously with the field data. However, bathymetric sonar is inefficient in shallow waters compared to ALB, and it is difficult to ensure that these two bathymetric methods are performed simultaneously. To assess the signal detection accuracy, signals detected from the NIR waveforms were taken as references. Because the NIR signal does not penetrate into the water column, it can provide an accurate position of the water surface [9]. For shallow water, the waveform in the PMT1 channel has a strong bottom return. Errors occur only when a mixed peak is detected. Thus, the accuracy of surface signal detection can reflect the accuracy of the water depth. However, NIR signals are not always reliable (sometimes buried below the noise level due to strong absorption and sometimes saturated; see Figure 14). In this experiment, the NIR signals were filtered according to the signal strength before being used as references. Surface Signal Detection To assess the performance of the proposed algorithms in shallow water, the traditional maximum method was applied to analyze the effect of the adaptive threshold (see Section 2.4). Signals in the waveforms processed by RLD were detected by the maximum method with both a fixed threshold and the adaptive threshold. The corresponding point clouds are referred to as RLD_M and RLD_A. The fixed threshold is set to N T + 3N P . The Gaussian model proposed in [12] is used to process waveforms in very shallow water. To compare the applicability of the models, the waveforms were fitted with both the Gaussian and the EW model. Their initial values were calculated according to RLD_A, and the generated point clouds are referred to as GD (Gaussian decomposition) and EW. The point cloud processed only by the traditional maximum method is also presented for comparison (denoted by Max). The water surface points from all the point clouds are compared in height with the reference data (see Figure 15). Most of the inaccurate surface points are lower than the reference data, because the bottom signals are detected as surface signals by mistake or the peak of the surface signal is shifted backward due to the overlap with the bottom signal. Max is most affected by this problem. Since RLD can improve the signal resolution, this problem is significantly reduced in RLD_M and RLD_A. EW shows good consistency with the reference data, and most of the errors are within 0.15 m, indicating that the errors of the detected signal positions are within one sampling interval of the waveform. Figure 15. Comparison between the height of the water surface in the reference data and the detection results.
Figure 16 shows the water surface on the selected profile (see Figure 13) detected by the above algorithms (only when both surface and bottom signals are detected will the surface point be displayed in this figure). The water depths on this profile are shallower than 1 m. The height of the water surface should float around 0 m, as in the reference data. The heights of most points in Max are between −1 m and −0.5 m, indicating that mixed peaks were detected. In RLD_M, more points have been detected, and the number of error points is significantly reduced, proving that RLD can effectively improve the signal resolution. Although the number of points in RLD_A is less than in RLD_M, most of the missing points are outliers. There are still many outliers in GD. Compared with the other results, the distribution of the points in EW is most similar to the reference data, and the difference is no more than 0.5 m. Table 2 provides the statistical parameters of the above results, where Dr represents the detection rate, which is defined as the percentage of surface points with an error less than 0.3 m; RMSE denotes the root mean square error of the surface points; min(d) corresponds to the detected minimum depth; and Std. means the standard deviation of the heights of the surface points. The detection rate of EW is 99.11%, which is the highest. The detection rates of RLD_M, RLD_A and GD are close, but the result of RLD_A is more reliable, which can be seen by comparing the RMSE values. GD and EW have the minimum min(d), followed by RLD_M and RLD_A. Apart from the RMSE, Std. can also reflect the accuracy of signal detection. Since sea level changes very little in a small area, the smaller the Std. is, the more accurate the surface points are. In general, EW performs best, with the highest detection rate, minimum error and maximum bathymetric range. The reference data have a low detection rate (58.54%) and a small Std. (0.0839), which indicates that NIR signals are sometimes undetectable but accurate. Adaptability Analysis of the EW Model To further study the effects of different models on waveform decomposition, three representative waveforms are selected from Dataset 1. The corresponding depth of the first waveform is more than 1 m, and the initial values provided by RLD_A are exact. The depths of the second and third waveforms are less than 1 m. For the second waveform, the initial values are rough, while for the third waveform, RLD_A only provides the mixed peak position. Figure 17 shows the fitting results. When the water depth is more than 1 m, the surface returns fitted by the two models are both correct, but the peak position of the surface return fitted by the EW model is closer to the reference data (see Figure 17a,d). When the water depth is less than 1 m, although the whole waveform is well fitted with the Gaussian model, the position of the surface return is not accurate (see Figure 17b,c). In contrast, the EW model is still applicable even in the absence of accurate initial values (see Figure 17e,f). In addition, it can be found that the fitting of water column scattering plays a very small role here, which also follows the practical situation, but it still influences the positions of the fitted surface return and bottom return.
Reference Data Dataset 2 is located on a slope with depths ranging from 20 m to 40 m, as shown in Figure 18. This dataset is selected to test the ability of the signal detection algorithms in deep water. The detection of surface signals has been analyzed in shallow water, and for the waveforms in deep water, the difficulty of signal detection mainly lies in the detection of the weak bottom signal. Thus, this experiment focuses on the detection of bottom signals. Here, signals detected from the PMT3 channel are used as reference data. Because PMT3 has a larger field of view than PMT1, the intensity of the bottom return in PMT1 is weaker than that in PMT3 (see Figure 19b). In Dataset 2, the bottom signals are weak in PMT1, but still strong in PMT3. Therefore, we apply the proposed method to PMT1, and the bottom signals detected from PMT3 with the maximum detection method are used as reference data. However, waveforms in PMT3 are not applicable to all cases. When the water is shallow, PMT3 is affected by multiple reflection (see Figure 19a), and when the water depth reaches a certain value, the bottom return in PMT3 is also weak (see Figure 19c). Since the bottom reflection can be approximated as a non-directional diffuse scattering, there is no difference between the bottom return in PMT1 and PMT2. Therefore, the optimal strategy is to process the waveforms in PMT1 when the water is shallow and process the waveforms in PMT3 when the detection of the bottom signal in PMT1 has failed. However, it is still necessary to detect the weak bottom signal in PMT3 effectively. In Dataset 2, the bottom signals in PMT1 are weak, which can simulate the detection of the weak bottom signal in PMT3. Bottom Signal Detection In deep water, waveforms may have a weak bottom signal with an amplitude approximately equal to the noise level, which is possibly caused by a deep depth, a turbid water or a dark bottom. Therefore, besides the RLD, which is used to improve the signal resolution, a filtering algorithm which can de-noise the waveform but keep the bottom signal in the meantime is also effective. ASDF is used as a filtering algorithm in [23], which is a substitute for the direct cross-correlation function to estimate the time delay in two discrete time series and is more computationally convenient. ASDF_M and ASDF_A denote the point clouds generated from the signals detected by the maximum method with a fixed threshold and the adaptive threshold from the waveforms that are processed by ASDF. We also applied the method proposed in [25] which used a first derivative of a wide Gaussian filter to reduce the processing range of the waveform, and processed the waveforms in this range with normalization, multiple smoothing and three times derivation to increase the detection rate of the bottom signal. The point clouds generated from this method are referred to as dddNCFWF. For the waveform decomposition, since the Gaussian model is no longer applicable, the quadrilateral model (QUAD) introduced in [18] is applied here together with the EFSP model proposed in this paper. These models are initialized with RLD_A, and the corresponding point clouds are referred to as QUAD and EFSP. Before the performance analysis, all the point clouds are filtered according to neighboring points to eliminate outliers. Figure 20 shows the detected water bottom on the selected profile (see Figure 18). The number of the detected points in ASDF_M is nearly twice that in Max or RLD_M. 
However, when the thresholds in the maximum method are adaptive, more water bottom points can be detected. RLD_A performs better than ASDF_A, which is exactly the opposite of the relationship between RLD_M and ASDF_M. The intensity of the bottom signal varies greatly with water depth, but the fixed threshold cannot take all of these situations into account. The detection rate of ASDF_M is higher than those of Max and RLD_M, while the detection rate of ASDF_A is almost equal to that of RLD_A. Because noise can be filtered by ASDF, the results indicate that the fixed threshold is more sensitive to noise than the adaptive threshold. dddNCFWF provides a high detection rate, just like RLD_A. However, there is an offset between it and the reference data, which may be induced by the dissimilarity between the emitted waveform and the symmetrical filters used in the method. Even with RLD_A as the initial value, QUAD still cannot accurately fit the waveforms with depths over 35 m. In contrast, the EFSP model is more flexible, but there seems to be little difference between RLD_A and EFSP. To further compare these algorithms, the above point clouds are statistically analyzed, as shown in Table 3; the detection rate (Dr) is the percentage of bottom points with an error less than √(0.3² + (0.015 × d)²) m (where d is the water depth); RMSE is the root mean square error of the bottom points; and max(d) denotes the detected maximum depth. Since there is an offset in dddNCFWF, its translation (dddNCFWF_T) that best matches the reference data is also considered in the following analysis. The detection rates of Max and RLD_M are low, while ASDF_M shows a higher detection rate. The detection rate is significantly improved in ASDF_A, RLD_A and dddNCFWF. The detection rate of dddNCFWF is almost equal to that of ASDF_A but lower than that of RLD_A, because fake signals with relatively high amplitude can sometimes be detected by dddNCFWF. In contrast, RLD can effectively keep the signals that are similar to the emitted waveform and remove other noise, even noise with relatively high amplitude. The detection rate of QUAD is lower than that of RLD_A, indicating that the detection rate is decreased after waveform decomposition with the quadrilateral model. EFSP improves on the detection results of RLD_A, with the highest detection rate of 76.64%. RMSE reflects the accuracy of the detected points. ASDF_M has the minimum RMSE, but its detection rate is too low. dddNCFWF has the largest RMSE. Although the accuracy can be improved by translation, there is still a large RMSE in dddNCFWF_T. The distance of the offset may be related to the signal strength, so it cannot be completely eliminated by the translation. Errors are introduced by the quadrilateral model, with the RMSE increased from 0.1347 m to 0.1504 m in QUAD. After waveform decomposition with the EFSP model, the RMSE is reduced from 0.1347 m to 0.1076 m, indicating that EFSP may correct the initial value even if there is some error in it. EFSP has a large max(d), exceeding that of the traditional maximum method by 10 m. In addition, dddNCFWF has the maximum max(d) among the tested methods, implying that it may have a better performance with an appropriate filter.
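The accuracy statistics used in Table 3 can be reproduced from paired detected and reference bottom depths. The following is a minimal sketch under two assumptions that are not stated in the paper: the detected and reference points are matched one-to-one (missed detections encoded as NaN), and the RMSE is computed over the points that pass the tolerance test. The tolerance formula follows the definition of Dr given above; all variable names are illustrative.

```python
import numpy as np

def bottom_point_statistics(detected_depth, reference_depth):
    """Detection rate (Dr), RMSE and max depth for bottom points.

    A detected point counts as correct when its depth error is below the
    depth-dependent tolerance sqrt(0.3**2 + (0.015 * d)**2), with d the
    reference water depth. Missed detections are expected as NaN.
    """
    detected_depth = np.asarray(detected_depth, float)
    reference_depth = np.asarray(reference_depth, float)
    valid = ~np.isnan(detected_depth)
    error = np.abs(detected_depth[valid] - reference_depth[valid])
    tolerance = np.sqrt(0.3**2 + (0.015 * reference_depth[valid])**2)
    correct = error < tolerance
    dr = 100.0 * correct.sum() / reference_depth.size          # detection rate in %
    rmse = float(np.sqrt(np.mean(error[correct]**2)))          # RMSE of accepted points
    max_d = float(np.nanmax(detected_depth))                   # detected maximum depth
    return dr, rmse, max_d

# Usage with a small synthetic example (depths in metres).
ref = np.array([22.0, 27.5, 31.0, 36.2, 39.8])
det = np.array([22.1, 27.3, np.nan, 36.9, 39.7])               # one missed, one outside tolerance
print(bottom_point_statistics(det, ref))
```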
Adaptability Analysis of the EFSP Model Since water depth is a key factor affecting the strength of the bottom signal, four waveforms at different depths are selected to test the adaptability of the model in waveform decomposition. The amplitudes of the bottom returns in these waveforms are 21.4, 3.2, 2.5, and 2 bins, respectively. For the first three waveforms, the initial values provided by RLD_A are correct, while the position of the bottom signal in RLD_A is wrong for the last waveform. The received waveforms and their fitting results are shown in Figure 21. It can be found that the quadrilateral model cannot fit the water column scattering as accurately as the EFSP model. When the bottom signal is fairly strong, the waveform decompositions with the two models can both converge to the exact position (see Figure 21a,b). With the decrease of bottom signal intensity, the inadaptability of the quadrilateral model becomes more and more obvious, even when the exact initial values are provided (see Figure 21c,e,g). In contrast, as long as the strengths of the bottom signals are sufficient, the locations of the bottom signals obtained with the EFSP model are precise regardless of the correctness of the initial values (see Figure 21d,f,h). Simulated Data To evaluate the accuracy of the surface and bottom signals, the water LiDAR waveform model (Wa-LiD) presented by Abdallah et al. [42] was applied in this experiment. Wa-LiD is a successful simulator for green-channel waveforms received from water and has been widely used in ALB research [15,18,19]. It can closely reproduce the received waveforms by adjusting some realistic water parameters [42]. In this experiment, the Gaussian function in Wa-LiD was replaced by the calibration waveform to better fit the real received waveforms. As shown in Figure 22, the real waveforms can be well fitted by Wa-LiD with proper environmental parameters. However, there are some differences between the leading and falling edges of the signal, which may be induced by the slope of the water surface and bottom. To assess the accuracy of the bathymetric signals in both shallow and deep water, we generated 10,000 simulated waveforms in which the depth varied from 0 to 2 m with an interval of 0.0002 m, and 10,000 waveforms in which the depth varied from 40 m to 50 m with an interval of 0.001 m. The environmental parameters of each simulated waveform were randomly selected within the normal range, which was determined from the real waveforms. Signal Detection in Shallow Water For the simulated waveforms of shallow water (0-2 m), we test the waveform processing algorithms in Section 3.2.2. Since the accurate surface and bottom signal positions are known, the detection rates of the surface signal (Dr_S) and the bottom signal (Dr_B) and their RMSEs (denoted by RMSE_S and RMSE_B, respectively) are assessed in Table 4. The accuracy of the surface signal is consistent with the field experiment results. EW still has the highest detection rate and accuracy. The Dr_S in the simulated data is generally higher than that in the field data, because the effect of waves on the water surface is not considered in the simulated waveforms. It is worth noting that the Dr_S of GD is higher than that of RLD_A, but the Dr_B of GD is significantly lower than that of RLD_A, indicating that GD is not well adapted to these waveforms. Due to the changes of the environmental parameters, the Dr_S of RLD_A is lower than that of RLD_M. However, EW is still able to achieve the correct decomposition with inaccurate initial values, which depends on the appropriate model (EW) and a good optimization algorithm (TR). Signal Detection in Deep Water The waveform processing methods in Section 3.3.2 are evaluated with the simulated waveforms of deep water (40-50 m), and the statistical results are shown in Table 5.
By comparing Tables 3 and 5, it can be found that the translation of dddNCFWF (dddNCFWF_T) performs better than EFSP here. Because the simulated waveform does not take into account the signal stretching caused by the slope of the water surface, the offset in dddNCFWF becomes a fixed value which can be eliminated by translation. It also proves that the error of dddNCFWF is mostly caused by the mismatch between the symmetric filter and the asymmetric emitted waveform. Apart from dddNCFWF_T, EFSP still has the best performance. Consistent with the results of the field experiment, the adaptive threshold significantly increases the Dr_B, and the Dr_B of RLD_A is still greater than that of ASDF_A or dddNCFWF, showing that RLD_A are the best initial values. Although EFSP does not increase the Dr_B of RLD_A, the RMSE_B of EFSP has been significantly improved. Even for the weak bottom signals in deep water, constructing a suitable model is also useful for improving the accuracy. Conclusions In this paper, a depth-adaptive waveform decomposition method for ALB was developed by classifying and fitting the waveforms in the green channel based on the similarities between them and the water column scattering. The application of the proposed method in shallow water datasets (where the depth is less than 2 m) and deep water datasets (with depths ranging from 20 m to 60 m) showed that this method can cope with most of the waveforms and significantly improves the accuracy of the detected signals. The main conclusions are as follows: 1. Water column scattering can be used as a sign to distinguish the received waveforms in terms of depth. The defined parameter S can be used to measure the similarity between the received waveforms and the water column scattering. Since water column scattering is covered in the shallow water waveform, the S of the shallow water waveform is obviously greater than that of the deep water waveform. Thus, waveforms can be classified precisely according to S. 2. For the waveform preprocessing, improving the signal resolution is more efficient than denoising. With an appropriate signal detection threshold, RLD always performs better than ASDF with a higher signal detection rate. Although filtering algorithms can remove the noise in signals and improve the accuracy of signal detection, the weak bottom signal may be filtered out as noise in the meantime. 3. The adaptive threshold can improve the reliability of the signal detection. The intensity of the bottom signal varies greatly with water depth, while the noise in water column scattering may be stronger than the bottom signal, leading to the detection of fake signals. Furthermore, although RLD is a deconvolution algorithm with good noise resistance, noise is inevitably introduced in the process. The adaptive threshold can better cope with the fake signals because it takes into account the effects of the water column. 4. With an appropriate model and reliable initial values, waveform decomposition can significantly improve the signal detection rate and accuracy. The proposed models, EW and EFSP, can fit the waveforms well in most cases. Compared with the Gaussian function, the transformation of the calibration waveform can better fit the water surface and bottom signals. The exponential function with a second-order polynomial is consistent with the shape of water column scattering in the waveform. The TR algorithm can solve the model parameters in a reasonable region and provide an accurate solution. 
The results of waveform decomposition are based on the whole waveform and are accurate to the sub-sampling interval. Even when the initial values are wrong, the detection results can be corrected by waveform decomposition in some cases. In addition, the processing time of waveform decomposition is long, meaning that whether the waveform decomposition step should be added depends on the accuracy requirements in practical applications. The waveform decomposition model proposed in this paper is for the Mapper5000 system and may need relevant adjustments when applied to waveforms acquired by other ALB systems. The applicability of the model is very important in waveform decomposition. In shallow water, due to the random fitting of water column scattering, a bias between the surface return fitted by EW and the surface signal detected from the NIR channel can be found. For future research, it would be useful to study the fitting of water column scattering in extremely shallow water. Using physical models rather than empirical models to fit water column scattering may be a breakthrough in this field. Figure A1. Schematic diagram of the optical system in the Mapper5000 system [44]. IFOV: instantaneous field of view; APD: avalanche photodiode; PMT: photomultiplier tube; PBS: polarization beam splitter. The "Laser" contains a green laser and an NIR laser. The IFOV of the receiver is separated into two parts: a small center IFOV (6 mrad) and a large edge IFOV (6-40 mrad). The green light obtained from the small IFOV is split by a PBS and is received by PMT1 and PMT2. Appendix B As shown in Figure A2a, the scanner in the Mapper5000 system contains a rotating mirror which is controlled by a motor. The angle between the laser emission direction and the axis of rotation is 45°, and the angle between the normal of the reflecting mirror and the axis of rotation is 7.5°. As the mirror rotates, the laser creates an elliptical trajectory on the sea surface (see Figure A2b). In this paper, the point clouds are generated by data processing software designed for Mapper5000. After extracting the bathymetric signals from the waveform, a data file is generated containing the surface signal position t_S, the bottom signal position t_B, the emitted signal position t_0, the encoder data of the scanner and the GPS time of each waveform. The data file and the POS data are input into this software, and the point cloud data can be generated. The refraction correction is also completed in this software; the principle is as follows. Figure A2. (a) The rotating scanner of Mapper5000 and (b) the spot trajectory on the sea surface. The angle between the normal of the reflecting mirror and the axis of rotation is 7.5°, resulting in an elliptical spot trajectory on the sea surface [44]. (c) The mechanism of the refraction correction for Mapper5000. The ray paths of the green laser in air and water are denoted by the green and blue lines, respectively.
As shown in Figure A2c, a right-handed coordinate system is established with the laser as the origin, the flight direction as the positive direction of the Y-axis, and the vertical upward direction as the positive direction of the Z-axis.
Modified nodal stage of esophageal cancer based on the evaluation of the hazard rate of the negative and positive lymph node Background The study aimed to propose a modified N stage of esophageal cancer (EC) on the basis of the number of positive lymph nodes (PLN) and the number of negative lymph nodes (NLN) simultaneously. Method Data from 13,491 patients with EC registered in the SEER database were reviewed. The parameters related to prognosis were investigated using a Cox proportional hazards regression model. A modified N stage was proposed based on the cut-off number of the re-adjusted ratio of the number of PLN (numberPLN) to the number of NLN (numberNLN), which was derived from the comparison of the hazard rate (HR) of numberPLN and numberNLN. The modified N stage was confirmed using the cross-validation method with the training and validation cohort, and it was also compared to the N stage from the American Joint Committee on Cancer (AJCC) staging system (7th edition) using Receiver Operating Characteristic (ROC) curve analysis. Results The HR of numberPLN for prognosis was 1.042, while that of numberNLN was 0.968. The modified N stage was defined as follows: N1 stage: the ratio range was from 0 to 0.21; N2 stage: more than 0.21, but no more than 0.48; N3 stage: more than 0.48. The log-rank test indicated that significant survival differences were confirmed among the N1, N2 and N3 sub-groups of patients in the training population. The survival differences among all patients were more significant with the modified N stage method than with the AJCC N stage. The result of ROC analysis indicated that the modified N stage could represent the N stage of EC more accurately. Conclusion The modified N stage based on the re-adjusted ratio of numberPLN to numberNLN can evaluate tumor stage more accurately than the traditional N stage. Background Esophageal cancer (EC) is a fatal disease with a poor prognosis [1]. Lymph node (LN) metastasis is usually already present at the time of diagnosis, and accurate evaluation of the tumor stage is a key step in determining post-operation treatment [2]. However, at present, the definition of the N stage is controversial. The N stage has typically been defined by the American Joint Committee on Cancer (AJCC) according to the number of positive lymph nodes (PLN), but a new N stage was proposed by the Japanese Society for Esophageal Diseases (JSED) [3,4]. The JSED N stage is defined according to the site of PLN. The site of PLN has been demonstrated to play an important role in the prognosis of patients with EC, while the key role of the number of PLN in the prognosis of patients with EC has been repeatedly confirmed and widely accepted by researchers [5][6][7]. Furthermore, recent research revealed that the site of PLN was a weaker predictor than the number of PLN (numberPLN) in a multi-parameter analysis using a survival model of EC [8]. Nevertheless, neither of these systems considered the influence of the number of negative lymph nodes (NLN) on the prognosis of patients with EC. Greenstein first described the impact of the number of NLN (numberNLN) on the outcome of patients with EC [9], suggesting that a higher number of NLN resected during surgery was associated with a better postoperative outcome. Hsu confirmed this finding in EC [10]. In another study, numberNLN was included in a scoring system for determining the prognosis of EC [11]. In other words, it was accepted that the numberNLN counted during the operation could increase the accuracy of identifying the N stage of the AJCC.
It was also inferred that number NLN could represent site information for tumor metastasis to some extent. Because of the significant impact number PLN and NLN has on the prognosis of patients with EC, a modified N stage that consists of both PLN and NLN might provide a more accurate representation of the extent of tumor metastasis in the regional LN station. However, it might not be accurate to define the modified N stage using the ratio of number PLN and NLN directly. This study investigates the feasibility of a modified N stage which is based on a combination analysis including the number of positive LN and negative LN in the meantime. The combined analysis refer to the result of the Cox proportional hazard regression model. Data source The Surveillance, Epidemiology, and End Results (SEER) program of the National Cancer Institute is a comprehensive source of population-based cancer information in the United States. The SEER database collects disease incidence, patient treatments, and survival data from population-based cancer registries covering around 28% of the country's population. SEER data comes primarily from hospital medical records as well as records from outpatient surgical, pathology, and radiology centers. The routine data collected in SEER database includes detailed information on demographics, diagnosis, and tumor characteristics. The work team engaged in active follow-up on the cases included in the SEER. Inclusion criteria for patients This study reviewed patient information collected from 2004 to 2011. Data was downloaded using the SEER*Stat software (8.3.5, The Surveillance Research Program of the Division of Cancer Control and Population Sciences, National Cancer Institute.). The inclusion criteria in this study are as follows: 1) All patients should have experienced radical lymphadenectomy; 2) The LN number collected during the operation was clear; 3) EC is one of the specific causes of patient death. A wide range of patient information was obtained from the SEER database. More specifically, the following variables and covariates were collected for this study: age, gender, tumor size, tumor extension, regional nodes positive, regional nodes negative, race (white, black and other), primary site of tumor (cervical segment, chest segment, abdominal segment and cross-section?), grade classification of tumor (I, II,III, and IV), AJCC Group (I, II,III, and IV), radiation sequence with surgery (no radiation and/or cancer-directed surgery, radiation prior to surgery, radiation after surgery, radiation before and after surgery, surgery both before and after radiation, sequence unknown, but both were given), tumor metastasis to bone, tumor metastasis to brain, tumor metastasis to liver, tumor metastasis to lung, the survival time and the status of patients. Statistical analysis The total population was divided into two groups using a random number table. One group was the training population, and the other was the validation population. The cross-validation method was used between the training population and the validation population. Cox proportional regression model was used to build a prediction function for time event data. The prediction function including the HR of PLN and NLN provided the coefficient to calculate the re-adjusted number of PLN and NLN for proposing the modified N stage of EC. The cutoff number for the ratio of the PLN count to NLN count was investigated using the method of the minimum of P values. 
This was performed using the software X-tile (2.0, University of Chicago). Differences in survival rates between subgroups categorized by N stage were analyzed using the Kaplan-Meier analysis and log-rank test. Receiver Operating Characteristic (ROC) Curve Analysis was used to investigate whether the modified N stage proposed by this study was more effective than the previous N stage definition. All analyses were performed using IBM SPSS version 21.0(SPSS Inc. Chicago, Illinois, USA). Continuous variables were presented as the mean ± standard deviation (SD) or when the data exhibited a skewed distribution, as the median and interquartile range (IQR). P values of 0.05(two-tailed) were established as the threshold for statistical significance. Baseline characteristics and outcomes The data for around 100,000 Patients with EC were reviewed through SEER statistical software, but only the medical records of 13,491 patients were collected under the inclusion criteria. The 13,491 patients with EC were classified into two groups according to the random number method. The two groups were the training population (n = 6698) and the validation population (n = 6793). The mean age of the total population was 66.70 ± 11.12 years, and 10,776(79.9%) of the patients were male. The average tumor size was 366.58 ± 179.92. The proportions of white, black, and other races were 84.9, 10.1, 4.7%, respectively. Follow-up data revealed that 5327(39.5%) of patients survived, and 8164(60.5%) of patients had died. Average survival time was 11.31 ± 11.35 months. In the comparison of the results from the training population with that from the validation population, no significant differences were observed according to sex, age, tumor site, tumor size, the organ metastasis, number PLN and NLN ( Table 1). The parameters identified in the results of cox proportional Hazard regression analysis The univariate analysis revealed that sex, race, age, tumor site, tumor size, tumor length, pathological grade of tumor, AJCC stage, the post-operation treatment, the distance of metastasis, and number NLN were all independent prognostic factors. The result of the multivariate analysis demonstrated that number PLN was also an independent prognostic factor in addition to the above parameters ( Table 2). The modified N stage proposed in this study According to the results of the Cox proportional hazard model, the HR of PLN was 1.064, and the HR of NLN was 0.962. The distance between the HR of PLN and the statistical standard point (which was 1) was 0.064 (ΔHP positive ), and the distance between the HR of NLN and the statistical point was 0.038 (ΔHP negative ). The ratio of ΔHP positive to ΔHP negative (N ratio) was used as a coefficient to produce the re-adjusted number of PLN and NLN. The analysis result of the minimum P value method indicated that the following ranges for the modified N stage were an appropriate solution: forN0 stage, the re-adjusted N ratio = 0; for N1stage, the re-adjusted N ratio = (0-0.08]; for N2 stage, the range of rate was (0.08-0.63]; for N3,the re-adjusted N ratio (0.63,+∞] In order to calculate the ratio of the re-adjusted number of PLN to NLN in special situations, the authors settled on the following two definitions of the ratio: 1 When number PLN and NLN were both 0, the ratio of the readjusted number of PLN to NLN (readjusted N ratio) was defined as 0; 2 When one of number PLN and number NLN is 0, 0 was defined as 0.0001. 
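The classification rule just described can be sketched in a few lines. The paper does not spell out exactly how the coefficient (the ratio of the two hazard-rate distances from 1) is applied to the counts; the sketch below assumes the simplest reading, namely weighting each count by its corresponding distance before taking the ratio, and it should therefore be read as an illustration of the classification logic rather than a re-implementation. The numerical values are the training-population values reported above.

```python
# Sketch of the modified N staging rule, under the assumption described above.
DELTA_HR_POSITIVE = 0.064   # |1.064 - 1| for positive lymph nodes
DELTA_HR_NEGATIVE = 0.038   # |0.962 - 1| for negative lymph nodes

def readjusted_n_ratio(n_pln, n_nln):
    if n_pln == 0 and n_nln == 0:
        return 0.0                            # rule 1: both counts zero -> ratio 0
    n_pln = n_pln if n_pln > 0 else 0.0001    # rule 2: a single zero count is
    n_nln = n_nln if n_nln > 0 else 0.0001    # replaced by 0.0001
    return (DELTA_HR_POSITIVE * n_pln) / (DELTA_HR_NEGATIVE * n_nln)

def modified_n_stage(n_pln, n_nln):
    ratio = readjusted_n_ratio(n_pln, n_nln)
    if ratio == 0:
        return "N0"
    if ratio <= 0.08:
        return "N1"
    if ratio <= 0.63:
        return "N2"
    return "N3"

# Usage: a patient with 2 positive and 20 negative nodes.
print(modified_n_stage(2, 20))   # ratio ~ 0.17 -> "N2" under this reading
```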
The feasibility and superiority of the modified N stage A cross-validation study was performed on the modified N stage. The modified N stage was developed from the training population and validated using the validation population. The log-rank test indicated that significant survival differences were confirmed among the N1, N2 and N3 sub-groups of patients in the training population, and the survival difference could be replicated in the validation population using the Kaplan-Meier analysis (P < 0.05, Fig. 1). The log-rank test also indicated that significant survival differences were confirmed among the N1, N2 and N3 subgroups of all patients, and the survival differences among all patients were more significant with the modified N stage method than with the AJCC N stage (Fig. 2). The result of the ROC analysis revealed that the area under the AJCC N stage curve was 0.934, and the area under the modified N stage curve was 0.956, which indicated that the modified N stage could represent the N stage of EC more accurately (Fig. 3). Fig. 1 The comparison of overall survival between the training group and the validation group. a The survival differences among the N1, N2 and N3 subgroups in the training data were significant (P < 0.05). b The survival differences among the N1, N2 and N3 subgroups in the validation data were significant (P < 0.05). Fig. 2 The comparison of the survival analyses using the AJCC N stage and the modified N stage, respectively. a: The survival analysis of all patients using the N stage from the AJCC tumor staging system. b: The survival analysis of all patients using the modified N stage method. Discussion Because the parameters of the tumor metastasis site may not be stable, patients with EC might not benefit from extensive lymphadenectomy [12,13]. The site would very likely be affected by the extent of the operation, whereas the numberPLN collected in lymphadenectomy would not be. Because PLN are usually enlarged, the surgeon is likely to notice them during lymphadenectomy and remove them. This means numberPLN would remain stable regardless of the individual extent of lymphadenectomy and the practice of different surgeons. Research has indicated that the NLN number could represent the extent of lymphadenectomy in patients with EC; therefore, more NLN removed meant a better prognosis [14]. However, it was controversial how many NLN should be removed in the lymphadenectomy to achieve a better prognosis. Greenstein advised that 18 NLN should be removed in the lymphadenectomy to obtain a better outcome for patients with EC, while 19 NLN was suggested by another study [9,10]. Baba suggested that at least 31 NLN should be resected in the lymphadenectomy; however, that study found that 31 NLN should be resected only in patients who underwent three-field dissection for the lymphadenectomy [15]. This finding indicated that numberNLN was an important factor in the prognosis of patients with EC. However, the results of the above studies on the advised number of resected NLN were slightly different. The reason for this might be that numberNLN was more easily influenced by confounders than other parameters. Although numberNLN could indicate the extent of lymphadenectomy and could reflect the site of tumor metastasis to some extent, a stable cut-off number of NLN removed in lymphadenectomy indicating a better prognosis was not replicated in this study.
The ratio of number PLN to number NLN or the ratio of number PLN to the total number of LN removed in lymphadenectomy could be used to explore the cut-off number which differentiates patients into sub groups with different outcome. This cut-off number could be used for proposing the modified N stage. Dhar et al first reported the ratio of number PLN to the total number of LN as a prognostic factor in EC in 2000 [16]. Mariette et al showed that the ratio of number PLN to the total number of LN was a strong independent prognosis factor [17]. The above studies demonstrated that the ratio number PLN to the total number of LN was as important as number PLN regardless of the extent of the lymphadenectomy and the application of neoadjuvant chemoradiation. However, what the best cut-off number was for the ratio of number PLN to the total number of LN remained controversial. Several studies proposed 0.2 as the cut-off number for the ratio in their modified N stage regimens, while other studies concluded that the cut-off number for the ratio should be 0.3 [18][19][20][21]. In the meantime, Tan suggested that 0.25 might be a more appropriate cut-off number for the ratio than 0.35, which was identified in Shao's research [22,23]. The above results indicated that the ratio of number PLN to the total number of LN failed to consistently predict the prognosis for patients with EC. The reason might be that none of the above research compared the relative impact of PLN, NLN, and total LN removed in lymphadenectomy on the general prognosis, but they simply used the ratio between them to explore the modified N stage. The criterion of the above-modified N stage would be affected by the research cohort or the proportion of patients. This study proposed the cut-off ratio of the PLN count to the NLN count based on the results of the Cox proportional hazard model. The procedure in this study was more reasonable than those procedures which directly explored the ratio of the PLN count to the NLN count. Furthermore, the cut-off ratio proposed in this study has been further confirmed using the cross-validation method on cohort data from the SEER database. The N stage introduced by the 7th AJCC was a regular criterion. The priority of its in predicting prognoses was usually selected to be compared by procedures of modified N stage recently. A cross-validation study was performed on the modified N stage. The modified N stage was developed from the training population and validated using the validation population, and the survival difference could be replicated in the validation population using the Kaplan-Meier analysis, and the difference of all the patients using the modified N stage method were more significant than AJCC N stage. The survival analysis in this study confirmed that the survival line of subgroups from the modified N stage separated more significantly than that of the N stage of 7th AJCC. Furthermore, the comparison between the modified N stage and the AJCC N stage was performed using the ROC method. The result of the ROC curve analysis demonstrated the superiority of this modified N stage system, which in turn supported the assumption of this study: the relative impact of PLN and NLN on the prognosis based on the results of the Cox proportional hazard model should be considered in the modified N stage. It was widely accepted that tumor differentiation, number PLN and NLN, the tumor stage of 7th AJCC, and organ metastasis were all independent prognostic factors [24,25]. This study confirmed that finding. 
Studies have also shown that the age of patients is a prognostic factor [26,27]. This study confirmed that finding as well. However, based on the analysis result of the Cox proportional hazard model, age only had a small impact on a patient's prognosis. This result implied that the impact of age on prognosis would only be noticed when using a large cohort. This finding was consistent with our previous research [28]. A recent report revealed that patients with EC with organ metastasis or distant metastasis in bone had a worse prognosis than others [5]. In this study, patients with tumor metastasis in bone had a worse outcome than those with tumor metastasis in the lungs or liver, but had a better outcome than patients with tumor metastasis in the brain. This finding was consistent with the report which showed that patients with brain metastases from a primary tumor located in the esophagus had a mean survival time of only six months [29]. Although the current study proposed a reasonable modified N stage for EC, it has several limitations. First, the study was retrospective; its results may be affected by confounding factors that were not controlled for. Second, although the SEER database was prepared according to strict criteria, the data were collected from multiple research centers with different operational habits. As a result, some differences in findings may be due to differences in research center practices. Third, the sample in this study consisted of different pathological types of EC. Because the data did not specify the type of EC, this study could not determine whether the application of this modified N stage system differed between the ESCC and SCC sub-cohorts. In summary, based on the results of the Cox proportional hazard model, the study proposed a modified N stage derived from the N stage system of the 7th AJCC for EC. The study also demonstrated the reasonability and superiority of the modified N stage using the cross-validation method, comparing it to the N stage system of the 7th AJCC. This modified N stage system is a promising step toward more accurately identifying the N stage of EC and, in turn, providing more effective treatment for this devastating disease. Conclusions The modified N stage based on the re-adjusted ratio of numberPLN to numberNLN can evaluate tumor stage more accurately than the traditional N stage.
Modeling Teacher Supports Toward Self-Directed Language Learning Beyond the Classroom: Technology Acceptance and Technological Self-Efficacy as Mediators This study explored the contributions of teacher supports toward students' self-directed language learning beyond the classroom and investigated whether technology acceptance and technological self-efficacy could be mediators between teacher supports and students' self-directed language learning in a sample of Chinese undergraduate students. A total of 197 freshmen students in one university in Eastern China completed questionnaires concerning teacher supports, technology acceptance, technological self-efficacy and self-directed language learning. The study highlighted the following results: (1) perceived usefulness mediated the relationship between teacher affective supports and students' self-directed language learning as well as the relationship between teacher capacity supports and students' self-directed language learning; (2) technological self-efficacy mediated the relationship between teacher affective supports and students' self-directed language learning as well as the relationship between teacher behavior supports and students' self-directed language learning; and (3) perceived ease of use had no noticeable mediating function, but exerted an indirect influence on students' self-directed language learning. These findings extend previous research by considering both the external factors (i.e., teacher supports) and the internal factors (i.e., technology acceptance and technological self-efficacy) that influence students' self-directed language learning, thereby contributing to our understanding of the joint drive of the inherent and extrinsic power mechanisms. This study indicates the significance of raising teachers' awareness of providing substantial supports to enhance students' self-directed language learning beyond the classroom and suggests that future research should examine how teachers' technology-related practices can be converted from compliance with institutional mandates into conscientious behaviors. INTRODUCTION Technology, with its fast-moving pace, has pervaded many aspects of education in recent years (Garrison and Akyol, 2009; Hung et al., 2010), thus enabling students' self-initiated, self-constructed, and self-monitored learning experiences in a newly constructed technology-based ecology of language learning (Lai and Gu, 2011; Reinders and White, 2011). Online learning, E-learning, M-learning and other informal technological learning approaches provide students with more chances to explore self-directed ways of learning (King and He, 2006; Zandi et al., 2014; Hsu, 2016; Huang et al., 2020; Pan, 2020). However, in spite of the booming attention to and development of technological teaching approaches in educational landscapes, the enthusiasm and motivation of students to conduct technology-based self-directed language learning need further exploration (Lai et al., 2018). Furthermore, although technology has become ubiquitous and demonstrated a variety of advantages, how it exerts its strengths and facilitates students' self-initiated use of technology for language learning is still a complicated problem (Chen, 2018; Huang et al., 2019a). Thus, an increasing number of scholars are arguing for the need to provide learners with external support to enhance effective use of technology for language learning (Cohen and White, 2008; Hubbard and Romeo, 2012; Lai et al., 2016). Jeyaraj et al.
(2006) found that school factors such as teachers' influence on technology adoption decisions significantly affected students' technology-based selfdirected learning. According to Huang et al. (2019b), teachers in China are considered as superiors and vital roles in supervising students' learning, as China is recognized by its collectivist culture where hierarchy is highly appreciated (Hofstede, 2008). Researchers also found that students could increase the frequency of self-initiated use of technology for language learning as a consequence of teachers' active encouragement and suggestions (Deepwell and Malik, 2008;Lai et al., 2016). Carson and Mynard (2012) further identified that some categories of teacher supports such as pedagogical suggestions, curriculum expectancy contributed to more favorable perception of use of technology and led to greater awareness of language learning potentials. As Lai et al. (2017) pointed out, "given the myriad of ways in which teachers shape language learners' perceptions of and self-directed use of technology, it is critical to understand how these different types of teacher behaviors interact with other psychosocial factors to influence language learners' self-directed use of technology for learning outside the classroom" (p. 1107). Research evidence has built up in support of teachers' supervising behaviors in facilitating students' willingness to study beyond the classroom (Hagger and Chatzisarantis, 2012). Therefore, teacher supports constitute a multitude of cognitive and non-cognitive functions for stimulating students' self-directed language learning. In addition to the enhanced external factors that affect students' technology-based self-directed language learning, various psychological and sociocultural factors that could influence students' adoption of technological resources for language learning were explored (Bailly, 2011;Lai et al., 2016). For instance, the study of Mew and Honey (2010) indicated that technological learning motivation significantly influences students' intention to use online learning websites, technologyrelated facilities and their personal technology application. Among the widely used, multidimensional constructs of perceived behavioral control, technological self-efficacy was considered as the dominant determinant of the intention of using the technology (Teo, 2009;Teo and van Schaik, 2012). However, despite some researches being conducted from either external or internal perspectives, there are still few studies to investigate the influence from both the internal and external factors on students' self-directed language learning. Therefore, this study aimed to explore how the external factors (i.e., teacher supports) influenced students' self-directed language learning and whether students' internal factors (i.e., technology acceptance and technological self-efficacy) would mediate the relationship between teacher supports and students' self-directed language learning. The present study's main contribution lies in enhancing our understanding of the potential roles that teachers could play in supporting students' self-directed use of technology for learning outside the classroom and the joint drive of the inherent and extrinsic power mechanisms. LITERATURE REVIEW Technology Acceptance Model Davis (1989) proposed the technology acceptance model (TAM) on the basis of the theory of reasoned action (TRA) raised by Fishbein and Ajzen (1975). 
"The Technology Acceptance Model (TAM) has been found to be efficient in explaining user behavior across a broad range of end-user computing technologies and user populations" (Teo, 2011(Teo, , p. 2433). In the TAM, Davis (1989) identified perceived usefulness (PU) and perceived easy of use (PEU) to be the antecedent variables to affect individual's intentions and behaviors to use technology, as individual's behavior intention is posted to be affected by the direct and indirect effects of PU and PEU. Perceived usefulness (PU) manifested learners' expected overall outcome of technology adoption, whereas perceived easy of use (PEU) dominantly pertained to those impacts associated with the process of using technology (Teo, 2011). Perceived usefulness was consistently considered to be the most robust predictor of students' technology adoption for learning intentions (Yousafzai et al., 2007;Teo, 2011Teo, , 2015. Previous research also found that students' preference and tendency to conduct technology-based learning was determined by their perception of the potential usefulness of technological resources (Clark et al., 2009;Lai and Gu, 2011). Therefore, perceived usefulness was involved as a significant predictor in our hypothesized model. Additionally, in response to Davis's (1989) conforming perceived ease of use as an antecedent of perceived usefulness, the associations between the two have been further explored. For instance, perceived easy of use (PEU) was examined to have a positive effect on perceived usefulness (PU) (Liaw and Huang, 2003;Teo, 2009;Wong et al., 2012). In the TAM model, "these two constructs influence the user's Attitude toward using the system (AT), which in its turn influences the Behavioral Intention to use the system (BI), which determines at the endpoint the actual system use where people use the technology" (Papakostas et al., 2021, p. 2). The new integrated TAM model proposed by Venkatesh and Bala (2008) takes into consideration user's general beliefs (i.e., perceptions of external control, technological self-efficacy) about computer applications. In recent years, the TAM has been widely utilized in many other areas such as economy and pedagogy. In the educational landscape, there are lots of empirical researches to connect pedagogical support for the use of TAM. For instance, Liaw et al. (2007) found out that students' technology acceptance is the key factors for technology-based learning. Hsieh et al. (2017) also proposed that the technology acceptance was the prerequisite for students to learn knowledge via using technology. Besides, there are further studies exploring the associations between learner's technology acceptance and other factors such as self-efficacy (Cho and Kim, 2013). Lai (2015) investigated the relationship between internal variables of technology acceptance and learner's intention to use technology in language learning context. Currently, TAM has been identified as a stable and parsimonious theoretical model for applications in educational contexts, such as mobile game-based learning as a solution in COVID-19 era , social networking-based learning , the integration of Augmented Reality (AR) in course training (Papakostas et al., 2021), the digital learning technologies (Sprenger and Schwaninger, 2021), and language teachers' adoption of educational technology (Sun and Mei, 2020). 
However, although there are a lot of studies to explore the technology acceptance model and connect this model with educational issues, there are still few researches to further investigate how learners' technology acceptance influence language learning in mainland China and whether technology acceptance can be the mediating variable to influence students' self-directed language learning. Teacher Supports Teachers significantly shape the quality of students' learning experiences by affecting students' cognitive, affective and social learning behaviors (Farmer et al., 2011). As a significant social agent, teachers play a critical role in helping students develop autonomy of technology-based language learning beyond class (Reinders and Darasawang, 2012). Knowles (1989) defined self-directed learning as "a process in which individuals take the initiative, with or without the help from others, in diagnosing their learning needs, formulating goals, identifying human and material resources, choosing and implementing appropriate learning strategies and evaluating learning outcomes"(p. 18). Extant literature has indeed approached self-directed learning from the perspectives of the personal attribute (e.g., individuals' propensity, willingness and capacity to conduct learning behaviors; Garrison, 1997), the process (Salleh et al., 2019), and the context (Song and Hill, 2007). In light of these particular research lines, the function of teacher supports should be manifested in helping students to be academically, professionally and psychologically empowered, motivating students' personal attribute, and facilitating students' selfinitiated use of technological resources to autonomously clutch the reins of self-directed learning process. According to Fagerlund (2012), in-class technological instructions and supports conducted by teachers will be learned and continued by students outside the classroom. Based on students' exposure to engaging learning experience and environments, Lai and Gu (2011) found that it was more possible for students to use the technologies that teacher had used in class. Accordingly, both the quantity and quality of students' autonomous use of technology to learn language are deeply influenced by teachers' opinions and behaviors (e.g., Arbaugh, 2000;Margaryan and Littlejohn, 2010;Imlawi et al., 2015;Hao et al., 2017). Carson and Mynard (2012) identified different teacher supports that facilitated students self-directed language learning: (1) by raising students' technological awareness through expounding the advantages of technology in language learning; (2) by offering technological resources/strategies to help students slash the difficulties of discovering useful resources online; and (3) by organizing varieties of technological activities to activate students technological interests. Researches have reported that the guidance and support from teachers drove students' engagement in technology-based self-directed language learning (Ertmer et al., 2012), helped students incorporate learning resources/activities into their learning ecology (Lai et al., 2014), and facilitated students to utilize technology as learning tools (McLoughlin and Lee, 2010). Due to different characteristics and functions of teacher supports, researchers resorted to the classification of teacher supports so as to definitely depict the associations of teacher supports and students' learning behaviors. 
Three categories of teacher supports of technology were posited, respectively, teacher affective supports (Carson and Mynard, 2012;Lai, 2015), teacher behavior supports (Deepwell and Malik, 2008;Gray et al., 2010;Fagerlund, 2012;Lai, 2013) and teacher capacity supports (Fagerlund, 2012;Lai, 2015). Teacher affective supports (TAS) mainly refer to teacher behaviors which can provide students with the basic knowledge of the strengths of technology as well as the encouragement of using technology in language learning (Xia and Lee, 2000). Teacher behavior supports (TBS) involve teachers' capacities of organizations and management that can help students participate in activities and tasks involving technologies (Ertmer, 2005). Teacher capacity supports (TCS) mainly help students to get some useful technological resources and tell them how to select and use technological resources effectively (Gallivan et al., 2005). The current literature abounds in discussions on the impact of teacher supports on promoting students' language learning. However, the internal mechanism between teacher supports and students' technology-based self-directed language learning beyond the classroom needs to be further explored. Technological Self-Efficacy Bandura's (1997) notion of self-efficacy highlighted how one individual's self-regulatory process influence his or her behavior, and thereby self-directed learning manifested the degree to which students are "metacognitively, motivationally, and behaviorally active participants in their own learning process" (Zimmerman, 1986, p. 308). Researchers for decades have been conducting studies in understanding the especially important role that self-efficacy plays in connection with self-directed learning. Research evidence shows that self-efficacy has strong relationship with one's expectations and interests in learning, including the enhancement of one's confidence (Zuffianò et al., 2013), the improvement of the degree of one's efforts on tasks (Abali Ozturk and Sahin, 2015), and the perceived responsibility for learning (Kitsantas and Zimmerman, 2009). Further, research result indicated that students with higher self-efficacy demonstrated a higher volley of inspirations and motivations than lower selfefficacy students, and tended to spend more time on their studies (Bassi et al., 2007). In the study of Zuffianò et al. (2013), self-efficacy has academically been viewed to possess the function to allow students to experience the feeling of worth and confidence which can contribute to students' better learning performance. In the wake of network technology, the phenomenon of combining the self-efficacy with technology is triggered. Based on the concept of self-efficacy, technological self-efficacy mainly refers to one's perception of his or her capacities to use technology-connected tools or resources to conduct and finish some tasks (Keengwe, 2007). Among the key motivation constructs associated with students' technology adoption for self-directed learning, technological self-efficacy is identified as the important factor that affects one's use of technology (Yesilyurt et al., 2016). In this study, technological self-efficacy is characterized as students' perception of their capabilities to utilize technology-related tools and sites to conduct learning behaviors so as to achieve intended learning outcome (Bandura, 1997;Keengwe, 2007). 
Researchers have verified a significant positive influence of technological self-efficacy on technology acceptance and utilization (Celik and Yesilyurt, 2013) and regarded technological self-efficacy as a proxy of individuals' control beliefs in technology use (Venkatesh and Davis, 1996). Researchers have also found that technological self-efficacy significantly affects students' behavioral preferences to use technological tools and their perceptions of the usefulness of technology for learning (Keengwe, 2007; Mew and Honey, 2010). More specifically, in the learning process, technological self-efficacy constitutes a significant psychological resource that students draw on to develop their habits of using technology and their perceptions of the usefulness of technology in learning (Keengwe, 2007; Mew and Honey, 2010). Therefore, technological self-efficacy noticeably affects students' technology-related language learning behaviors. As one of the causal attributions students make regarding their technology-based self-directed learning, technological self-efficacy should be considered and examined, especially in technology-mediated educational contexts. While a burgeoning body of research on self-directed language learning, self-efficacy, and sources of self-efficacy has been conducted (Sundqvist, 2011; Su et al., 2018), there remains a lack of research examining both the external and internal variables that influence students' awareness and perceptions of technology acceptance and technological self-efficacy in cultivating and enhancing their self-directed language learning behaviors. Based on previous research, teachers, as important social agents, play irreplaceable roles in directing and facilitating students' self-directed language learning (Davis, 2003). Therefore, this study connects several variables (i.e., teacher supports, technology acceptance, and technological self-efficacy) to explore their respective influences on students' self-directed language learning and the potential relationships among them. Problem Statement and Hypotheses Informed by the recent visions in the study of technology-based self-directed language learning discussed above, two research questions are specified below: Question 1: How do teacher supports contribute to students' technology-based self-directed language learning beyond the classroom? Question 2: Will technology acceptance and technological self-efficacy mediate this relationship? Thereby, this research aimed to test the following nine hypotheses (Figure 1): H1: Teacher affective support (TAS) correlates with students' self-directed language learning (SDLL) via perceived usefulness (PU). Participants and Procedure Participants were freshman students at a large comprehensive university in Eastern China who were taking compulsory college English courses at the time of this study. Advanced network technology has been applied to college English teaching and learning at this university in line with the reform of the college English course. In total, 201 freshman students voluntarily participated in the survey and were informed of their right to decline. Questionnaires were distributed to the participants on the spot during the break of a college English class and collected immediately after completion. After giving consent, participants were briefed on the measures in the questionnaire and completed it anonymously within 10 min.
A total of 197 valid questionnaires were retained after discarding incomplete questionnaires. Of the valid participants, 71 were males (36%) and 126 were females (64%), with an average age of 19 (SD = 1.45). Noticeably, the equipped network technology in this university is accessible to all students, offering them good facilities to independently conduct self-directed language learning beyond the classroom. Hence, the experiences of the participants' technology adoption for self-directed language learning are representative of what most language learners in this university would experience. Measures The questionnaire items which were adapted and validated from various published sources were used to assess teacher supports (Lai, 2015), technology acceptance (Davis, 1989;Moore and Benbasat, 1991;Taylor and Todd, 1995;Venkatesh and Davis, 2000;Ajzen, 2002), technological self-efficacy (Keengwe, 2007;Celik and Yesilyurt, 2013) and self-directed language learning (Jansen and Janssen, 2017) respectively. Each questionnaire item was measured on a 5-point Likert Scale, ranging from 1 (strongly disagree) to 5 (strongly agree). Higher scores indicated higher perceptions of teacher supports, technology acceptance, technological self-efficacy and self-directed language learning. Teacher Supports Teacher supports were measured in three scales: teacher affective supports (four items, e.g., My English teacher encourages us to use technology for language learning outside the classroom), teacher behavior supports (four items, e.g., My English teacher assigns assignments that are based on or involve the use of online resources or tools), and teacher capacity supports (four items, e.g., My English teacher shares with us useful technological resources/sites/tools for language learning outside the classroom). The Cronbach alpha values of teacher affective supports, teacher behavior supports and teacher capacity supports are 0.916, 0.89, and 0.888, and Kaiser-Meyer-Olkin (KMO) values for validity are 0.846, 0.797, and 0.803, respectively, indicating a good reliability and validity. Technology Acceptance Technology acceptance was measured using two scales: perceived usefulness (five items, e.g., I find technologies useful in language learning) and perceived ease of use (five items, e.g., I find it easy to select and find appropriate technological tools needed to enhance language learning). As the Cronbach alpha values of perceived usefulness and perceived ease of use are 0.915 and 0.857, respectively, and Kaiser-Meyer-Olkin (KMO) values for validity are 0.886 and 0.748, the scale has a good reliability and validity. Technological Self-Efficacy Students' technological self-efficacy included five items, e.g., I have the confidence to be proficient in using technology when learning English independently. The Cronbach alpha value of technological self-efficacy is 0.909 and Kaiser-Meyer-Olkin (KMO) value for validity is 0.852, indicating that the scale has a good reliability and validity. Students' Self-Directed Language Learning Students' self-directed language learning included four items, e.g., I like self-directed English learning outside the classroom. The Cronbach alpha value of students' self-directed language learning is 0.917 and Kaiser-Meyer-Olkin (KMO) value for validity is 0.853, thereby indicating that the scale has a good reliability and validity. 
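The reliability coefficients reported above can be reproduced from item-level data with a short script. The following is a minimal sketch, not the authors' code: the responses and column names are hypothetical, and only the 1-5 Likert format and the four-item structure of the teacher affective support scale are taken from the text.

```python
# Minimal sketch of Cronbach's alpha for a four-item Likert scale.
# Item responses and column names (TAS1-TAS4) are hypothetical.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of five students to the four TAS items (1-5 Likert scale)
tas = pd.DataFrame({
    "TAS1": [4, 5, 3, 4, 5],
    "TAS2": [4, 4, 3, 5, 5],
    "TAS3": [5, 4, 2, 4, 4],
    "TAS4": [4, 5, 3, 4, 5],
})
print(round(cronbach_alpha(tas), 3))
```

The same function applies unchanged to the five-item technology acceptance, technological self-efficacy, and self-directed language learning scales.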
Method of Data Analysis First, SPSS 21.0 was used to analyze the reliability and validity of each variable (i.e., teacher supports, technology acceptance, technological self-efficacy, and students' self-directed language learning). Second, data were analyzed using structural equation modeling (SEM) in AMOS 21.0, including examination of the measurement model and the structural part of the SEM (Teo et al., 2016). Following the recommendations of Hu and Bentler (1999), model fit was tested using several goodness-of-fit indices, including the ratio of the chi-square to its degrees of freedom (χ²/df), RMSEA, SRMR, CFI, and TLI. According to Hair et al. (2010), values of χ²/df (<3), CFI (>0.90), TLI (>0.90), RMSEA (<0.08), and SRMR (<0.08) are reflective of a good fit. In addition, the significance of the mediation effects was assessed using the bias-corrected percentile bootstrap method (Hayes, 2013), computing the confidence interval (CI) for each mediated effect. When zero is not included in the CI, the indirect effect is significant. Descriptive Results and Correlations As shown in Table 1, the mean values of the seven variables ranged from 3.27 to 3.88, indicating participants' positive responses to the variables in the questionnaire. The standard deviations ranged from 0.78 to 1.01, indicating a narrow spread of participants' responses. Tables 2 and 3 show that all the measures had good reliability (Cronbach's alpha ranged from 0.857 to 0.917). The Pearson correlation matrix for the relations between variables shows that there were noticeable correlations among the study variables. As shown in Table 2, TCS and TAS had a relatively high correlation (r = 0.826), so collinearity variance inflation factors (VIFs) were calculated to examine potential multicollinearity problems. The VIF scores ranged between 1.832 and 4.361 (all < 5), which indicated that the estimation of the regression coefficients would not be affected by multicollinearity problems (Montgomery et al., 2001). Measurement Model Confirmatory factor analysis (CFA) was conducted to assess the fit of this measurement model. First, to assess discriminant validity, the square root of the AVE for each construct was examined. "If the square root of the AVE of a construct was greater than the off-diagonal elements in the corresponding rows and columns, this suggests that a construct is more strongly correlated with its indicators than with the other constructs in the model thus suggesting the presence of discriminant validity" (Teo, 2011, p. 2436). Table 2 demonstrated that this measurement model established discriminant validity, as the square root of the AVE (shown in parentheses along the diagonal) of each construct is higher (0.740-0.859) than the corresponding correlation values for that variable in all cases. Second, the convergent validity of the measurement model was tested by examining the reliability of each item through its factor loading and by assessing construct reliability through Cronbach's alpha, average variance extracted (AVE), the t-value (C.R. > 2), and the S.E. value (>0) of the parameter estimates. Teo and van Schaik (2012) suggested that standardized factor loadings should exceed 0.70 and that the AVE of each construct should exceed 0.50. By these criteria, Table 3 indicated good convergent validity of this measurement model.
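As a concrete illustration of the bias-corrected percentile bootstrap described under Method of Data Analysis, the sketch below estimates a single X → M → Y indirect effect with ordinary least squares and bootstraps its confidence interval. It is not the authors' AMOS analysis: the data are simulated, and the variable roles (teacher affective support, perceived usefulness, self-directed language learning) are used only as labels.

```python
# Bias-corrected percentile bootstrap CI for an indirect effect a*b (simulated data).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 197
x = rng.normal(size=n)                       # e.g., teacher affective support
m = 0.5 * x + rng.normal(size=n)             # e.g., perceived usefulness
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # e.g., self-directed language learning

def indirect_effect(x, m, y):
    # a: slope of M on X; b: slope of Y on M, controlling for X
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

theta_hat = indirect_effect(x, m, y)
n_boot = 5000
boot = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

# Bias-corrected percentile CI (no acceleration term)
z0 = norm.ppf(np.mean(boot < theta_hat))
lo, hi = norm.cdf(2 * z0 + norm.ppf([0.025, 0.975]))
ci = np.quantile(boot, [lo, hi])
print(theta_hat, ci)    # the indirect effect is "significant" if 0 lies outside the CI
```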
In addition, Table 3 indicated that, except for PEU1, the standardized factor loadings for all the study constructs exceeded the minimum of 0.70, suggesting good construct validity. PEU1 was not excluded from further analysis because its loading was statistically significant. Path Analysis Testing the Hypothesized Model Grounded in previous research (e.g., Hu and Bentler, 1999; Marsh et al., 2004), this study examined the model fit using the root mean square error of approximation (RMSEA), the standardized root mean square residual (SRMR), the comparative fit index (CFI), and the Tucker-Lewis index (TLI). A good model is indicated by RMSEA < 0.08, SRMR < 0.06, and CFI and TLI > 0.90. As shown in Table 4, the unrevised model did not satisfy the fit criteria. According to the modification indices in AMOS 21.0, the M.I. values of the paths technological self-efficacy (SE) → perceived ease of use (PEU), perceived ease of use (PEU) → technological self-efficacy (SE), and teacher behavior supports (TBS) → self-directed language learning (SDLL) were 43.801, 33.864, and 10.78, respectively, indicating that a better model could be established by adding these three paths. Therefore, after adding the three paths, the modified structural model (Figure 2) yielded a better fit (χ²/df = 2.616 < 3, GFI = 0.908 > 0.90, CFI = 0.991 > 0.90, RMSEA = 0.079 < 0.08, SRMR = 0.0154 < 0.06). According to Table 5, except for the paths teacher capacity supports (TCS) → technological self-efficacy (SE), teacher affective supports (TAS) → perceived usefulness (PU), teacher capacity supports (TCS) → perceived usefulness (PU), and teacher behavior supports (TBS) → perceived usefulness (PU), the standardized path coefficients of the remaining paths were not close to or greater than 1, and the S.E. values of the parameter estimates were greater than 0, indicating that the parameters of the structural model were reasonably estimated. Table 6 showed that TAS → PU → SDLL, TAS → SE → SDLL, and TCS → PU → SDLL had total mediating effects. In addition, TBS → SE → SDLL had a partial mediating effect. As such, it can be concluded that perceived usefulness (PU) and technological self-efficacy (SE) mediated the relationship between teacher supports and self-directed language learning (SDLL), with statistically significant 95% confidence interval (CI) values. According to the guidelines of Cohen (1988), effect sizes with values less than 0.1 are considered small and those around 0.3 medium; the indirect effects in Table 6 were statistically significant and, ranging from 0.079 to 0.178, fell in the small-to-medium range. DISCUSSION Currently, technology is increasingly utilized in Chinese classrooms. This increase in technology access lessens external barriers known as first-order barriers (Kopcha, 2012). Previous studies, which have primarily focused on teachers' technological integration into pedagogical instruction, found that technology access does not automatically equate to efficient technology usage (Ertmer and Ottenbreit-Leftwich, 2010) and that teachers are still limited in facilitating students' technology-based learning (Buabeng-Andoh, 2012).
Thus, the value of teachers' internalization of external barriers and externalization of personal beliefs for technology integration has been highlighted (Vongkulluksn et al., 2018), and more importantly, "it is essential that we not only focus on what teachers could do with technologies inside the classroom but also explore how teachers could help maximize the potentials of technology for learning by enhancing the quantity and quality of learner self-directed use of technology for learning outside the classroom" (Lai, 2015, p. 80). The present study investigated the mediating roles of internal factors linking teachers' various supports to students' self-directed language learning. Specifically, the study constructed a multiple mediation model to examine the mediating roles of perceived usefulness, perceived ease of use, and technological self-efficacy in the associations between teacher supports and students' self-directed language learning. The results demonstrated that three categories of teacher supports influenced the development of the mediating factors (i.e., perceived usefulness, perceived ease of use, and technological self-efficacy), which subsequently linked to students' self-directed language learning. These findings extended previous research by considering both the internal factors (i.e., perceived usefulness, perceived ease of use, and technological self-efficacy) and the external factors (i.e., teacher supports) influencing students' self-directed language learning. Consistent with previous studies, the path analysis revealed that teacher supports, especially teacher behavior supports, were directly associated with students' self-directed language learning (Lai, 2015). Specifically, students who perceive teachers' behavior supports, such as teachers' encouragement to use technological resources, tend to conduct more self-directed language learning beyond the classroom. The path analysis of this study also indicated that the other two teacher supports (i.e., teacher affective supports and teacher capacity supports) did not exert a direct influence on students' self-directed language learning but affected it through the mediating factors. Affective and capacity supports such as encouragement, recommendations of learning resources, and instruction in metacognitive strategies are teacher responsibilities of which teachers have been found to be largely unaware (Toffoli and Sockett, 2015). Additionally, the multiple mediation model indicated how teacher supports indirectly influenced students' self-directed language learning through the mediating roles of students' perceived usefulness, perceived ease of use, and technological self-efficacy. Thus, an implication of the results for professional development initiatives is that teacher supports need to be highlighted as: (1) undertaking teachers' responsibilities of facilitating students' willingness and capacities for technology-based learning in and beyond the classroom; (2) providing scaffolding mechanisms that support students' self-directed use of technology for learning outside the classroom (Reinders, 2010); and (3) embodying teachers' capacity to maximize the potential of technology for education. The Mediating Role of Technology Acceptance The results of this study revealed that perceived usefulness mediated the relationship between teacher supports and students' self-directed language learning.
According to the mediating paths, perceived usefulness totally mediated the relationship between teacher affective supports and students' self-directed language learning as well as the relationship between teacher capacity supports and students' self-directed language learning. The former mediating path coincided with the previous study which considered verbal persuasion or affective support as an important antecedent to induce people's behavioral changes through their positive attitudinal changes (Petty and Cacioppo, 1986). Lai (2015) identified that teacher affective supports could predict self-directed technology use by improving students' perceived usefulness. Therefore, teachers' affective supports such as oral persuasion and encouragement can induce students' behavioral changes such as changing their self-directed language learning behaviors through students' positive attitudinal changes (e.g., their perceived usefulness). The latter mediating path revealed that perceived usefulness had the mediating influence on the relationship between teacher capacity supports and students' self-directed language learning, which is also accordance with the previous study (Lai, 2015). Additionally, the mediating role of perceived usefulness between teacher capacity support and students' self-directed language learning corroborated previous studies concerning the role value of teacher capacity by: (1) improving students' awareness of usefulness for the behavior (Xia and Lee, 2000); (2) strengthening students' willingness to use variety of and potentials of technological resources to learn the language outside the classroom (Gamble et al., 2012); and (3) facilitating students to learn language more positively and independently after class (Lai, 2015). For instance, teachers' capacity behaviors such as providing in locating, selecting and using appropriate technological resources had indirect influence on their behaviors by changing students' behavioral intentions (McLoughlin and Lee, 2010;Lai et al., 2016). Students may enhance their perceived usefulness of technology and increase the frequency of self-directed language learning outside the classroom on the condition that teachers offer students capacity supports. The results of this study found no noticeable mediating functions of perceived easy of use between teacher supports and students' self-directed language learning and supported the hypotheses by previous researches (e.g., Davis, 1989;Venkatesh et al., 2003;Lai et al., 2012) which confirmed that perceived easy of use didn't have direct effects on user's behavioral intention but indirectly influenced user's intentions to use technology through perceived usefulness. Students' self-directed language learning can be facilitated by directly enhancing their perceived usefulness and indirectly strengthening their perceived easy of use. In this study, according to the path analysis of teacher affective supports to perceived easy of use, teacher affective supports strongly promoted perceived easy of use, which verified the previous study that teachers' verbal persuasion or oral encouragement had the positive influence on students' perceived easy of use because teachers are the critical pedagogical examples and agents that shape students' awareness to use technology (Reinders, 2010;Toffoli and Sockett, 2015). 
Thus, this study echoed the view of Teo and Noyes (2014) that "technology providers ensured the ease of use of media which are targeted at teaching and learning in order to attract more educational users" (p. 62). The Mediating Role of Technological Self-Efficacy The study also documented that both teacher affective supports and teacher behavioral supports could relate to students' self-directed language learning through the mediating role of technological self-efficacy. According to Henry (2013), perceived social support had a significant influence on technological outcome expectations and interests. What's more, technological self-efficacy had strong associations with technological expectations and interests (Sheu et al., 2010). Fishbein et al. (1980) conceptualized that verbal persuasion or affective support was the fundamental element for individuals' attitudinal beliefs and confidence. Technological self-efficacy which standards for individuals' intentions and beliefs to use technology was considered as the vital determinant of the behavior of using the technology (Hall and Higgins, 2005;Ma et al., 2005;Teo, 2009;Teo and van Schaik, 2012;Moftakhari, 2013). On a similar note, Scherer et al. (2019) held that an individual is more likely to use technology if he/she has higher technological self-efficacy. To be specific, students with teacher affective supports may have stronger attitudinal beliefs and technological self-efficacy. Moreover, they may better cope with their self-directed language learning outside the classroom. Besides, in this study, the results indicated that teacher behavioral supports indirectly influenced students' self-directed language learning through the mediating role of technological self-efficacy. This finding is consistent with Ertmer (2005) in that teacher behavioral supports could give students opportunities to observe the teacher and other peers on how to use technology to assist language learning. Lai (2015) pointed out that teacher behavior supports could give students ideas of possible useful technological resources in self-directed language learning. By absorbing and learning teachers' or peers' technological experiences, it is possible for students to enhance their confidence and technological self-efficacy which may exert advantageous influence on learning language independently beyond the classroom. Thus, it might be useful if, in the teaching process, teachers offer affective or behavioral supports to help build up students' technological self-efficacy. Implications and Limitations The present study theoretically established a solid foundation for the compensatory model of teacher supports toward selfdirected language learning based on the university students' sample of primarily highly engaged language learners and covered the internal factors of their technology acceptance and technological self-efficacy. From a practical point of view, the implications of this study can be depicted as follows: (1) enhancing a deeper understanding of students' utilization of technology for language learning; (2) serving as a useful guidance on the development of intervention programs where teachers could optimize their potential roles in supporting students' technology-based self-directed language learning beyond the classroom; and (3) reducing the number of obstacles posed in online learning by shifting students' maladaptive obsessive engagement to self-determined engagement through teacher supports and the stimulation of students' psychological factors. 
Despite this study adopted rigorous procedures, there existed limitations. First of all, the results of this study were grounded on a comparatively small sample, which may give rise to a potential bias that will affect the degree to which these results are generalizable. The future study may entail involving a larger sample to include different types of student participants. Secondly, the simplex cross-sectional design being applied in this study may result in a common method bias. Hence, it is suggested that future study adopt multi-layered, multidimensional methods (e.g., the combination of cross-sectional design with longitudinal research) to enhance our understanding of the causality as far as possible. CONCLUSION This study aimed to explore the contributions of teacher supports to students' self-directed language learning and investigate whether three variables (i.e., perceived usefulness, perceived easy of use and technological self-efficacy) mediated these associations. The findings of this study indicated that teacher supports influenced students' self-directed language learning mainly through perceived usefulness and technological selfefficacy while perceived easy of use had indirect mediating functions by directly influencing perceived usefulness. Thus, there was evidence from this study to suggest the significance of elevating teachers' awareness of the substantial supports in establishing and enhancing students' perceived easy of use, perceived usefulness and technological self-efficacy so that students are highly motivated to conduct self-directed language learning beyond the classroom. This study also suggested that the improvement of students' technology-based self-directed language learning may be most feasible by promoting beneficial harmonious engagement through teacher supports and the stimulation of students' psychological factors. Against the background of technology integration redefining teacher-student interactions in the teaching landscapes, the results of this study would inform that the future research on teachers' compliance in relation to technology use be converted from institutional mandates into teachers' conscientious behaviors. AUTHOR'S NOTE XP is an associate professor in Xingzhi College, Zhejiang Normal University, Jinhua, China. His research interests are English Language teachers' technology use for professional development and students' learning, intercultural English education and educational psychology. His publications have appeared in International Journal of Computer-assisted Language Learning and Teaching, and Social Behavior and Personality, and Frontiers in Psychology. WC is a postgraduate student in College of Foreign Languages, Zhejiang Normal University, Jinhua, China. Her research interests are second language acquisition and technology-based self-directed learning. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS XP conceived, designed, and executed the study, collected the data, wrote the manuscript, and revised the final version of the manuscript. 
WC analyzed the data and participated in writing and revising the manuscript. Both authors have read and approved the submitted version.
Investigating the relationship between the quality of life and religious coping in mothers of children with recurrence leukemia Background: Leukemia is a life‐threatening chronic disease for children. The recurrence of the disease causes tension and reduces the quality of life for the family, especially for mothers. Religion is an important humanitarian aspect of holistic care that can be very effective in determining the health level of the patient and the family members. The present study aims at investigating the role of religious coping (RCOPE) in the quality of life for mothers of children with recurrent leukemia. Methods: This is a cross‐sectional study of the descriptive‐correlational type. Two‐hundred mothers with children aging 1–15 years suffering from leukemia were selected using a continuous sampling method. The data were collected using questionnaires eliciting information about personal information, Persian version of the Caregiver Quality of Life Index‐Cancer, and RCOPE. The collected data were analyzed in SPSS using descriptive tests and independent samples t‐test. Results: The result of examining the relation between life quality and demographic features of mothers showed that education level, income, and occupation had a significant statistical relationship with general quality of life mothers. The results of examining the relationship between quality of life and RCOPE of mothers showed that RCOPE was positively correlated only with the positive coping dimension quality of life (P < 0/001). Negative RCOPE had a significant reverse statistical correlation with general quality of life and all its aspects. Conclusion: The quality of life for the participants in this study was significantly related to RCOPE. Mothers with negative RCOPE faced low scores for quality of life, and religious support can improve their life quality. Further longitudinal studies are required to investigate the effects of establishing support communities. Introduction Cancer is one of the most serious diseases of childhood which is both highly prevalent and has a great effect on the lives of the suffering children and their families. [1] Based on the statistics released by The World Health Organization, 100 children in every one million children suffer from cancer. In a study conducted by Mousavi et al., leukemia was reported to be the most common type of cancer in children aging 0-14 years in Iran, and its frequency was estimated to be 8-62 cases in every one million children in 2004 in different parts of the country. Although cancer is a rare disease among children under the age of 15 in comparison with adults, it deserves special attention and investigation because of its seriousness. [2] Recent breakthroughs in treating leukemia using chemotherapy or combined therapies have increased recovery rate from 97% to 99%, and long-term survival has reached 80%. Despite the high rate of recovery, recurrence is possible in 20% of cases. [3] Cancer recurrence is defined as the recurring of the disease after a period of recovery. [4] Treating patients with recurring cancer is much more difficult than patients without relapse of the disease, and survival rate for patients with recurring cancer is significantly lower in comparison with the other group. [5] Cancer has a serious effect not only on patients but also on people who take care of them. 
[6] Families of the patients, as the most important and most readily available source of support and providing care, experience a lot of tension and stress, which can leave harmful effects on the quality of their lives. [7,8] In children with recurring cancer, these harmful effects are more severe and leave more serious effects on their quality of life. [9] Recurrence is an unpleasant experience for the family and survivors of the patients because they have to face the psychological-social effects of cancer including uncertainty, helplessness, and worries about death, again. [10] Diagnosing cancer recurrence requires reorganizing the family to deal with their fear and their situation regarding starting a new treatment protocol. [10,11] As the caregivers are trying to overcome the difficulties of the treatment, they are threatened by treatment risks and uncertainty about prognosis and the possibility of death regarding the recurrence of the disease. It seems that not being able to have plans for their future creates a feeling of failure in them. [12] When the caregivers fail to stop the suffering of the child and the development of the disease, a lack of a sense of control is felt, and they experience a sense of shame due to the gap between their wants and desires on the one hand and the reality on the other hand. [13] The psychological effects on the family members become more prominent when the recurrence is diagnosed and they have to decide overtreatment. The role of the health team as a mediator in providing support becomes more vital in dealing with such critical situations. [10,14] Studies have shown that caregivers of patients suffering from cancer experience high levels of stress, depression, tiredness, hopelessness, fear, guilt, sleep disorders, and social isolation. [15,16] In the case of children with blood cancer, their mothers have been reported to have lower levels of mental health and higher levels of depression and stress than their fathers. [17] Furthermore, the quality of life for parents of children suffering from cancer has been reported to be low in comparison with other people. [18,19] Quality of life is a multidimensional construct, which includes physical, emotional, and social welfare aspects of people's life. [20] Since quality of life is a multidimensional concept, some researchers include spirituality and religion in their study of life together with the other aspects mentioned earlier. [21] The spiritual aspect is explained and interpreted as the need for meaning, goal, integrity of life, hope, beliefs, and fate. Therefore, this spiritual aspect is of great importance in achieving a sense of integrity and in the quality of life. Spirituality-religion is an effective factor in increasing people's capacity to accept, improving people's quality of life, and decreasing hopelessness in dying patients. [22] Religion, as an important aspect of humanity in holistic care, improves health and creates a sense of fitness. Since religious beliefs are so important in the health of the patients and their family members and in self-care behaviors, raising awareness and knowledge of religious beliefs is of great importance. [23] Religious coping (RCOPE) is an important strategy for dealing with stressful situations. [24] The concept of RCOPE refers to using cognitive behavioral techniques that help a person cope with difficult and stressful situations in life, [25] which is a multidimensional concept and can leave both positive and negative effects. 
[26] Therefore, not all RCOPE techniques are useful and not all of them result in adjustment. It is conceived that positive RCOPE is accompanied by the benefits of psychological adjustment such as "solving personal problems with assistance from god," while negative RCOPE results in worse consequences and is considered "believing in a punisher god." [27] In several studies, RCOPE has consistently been shown to leave positive effects on the life quality of many patients suffering from chronic diseases such as breast cancer, [28] chronic pain, [29] cancer, [30] the human immunodeficiency virus, [31,32] end-stage renal disease, [33,34] and epilepsy. [35] Furthermore, Zamanian et al. investigated 101 patients suffering from cancer and found out that religion and spirituality were positively related to quality of life and mental health. [36] Furthermore, religion has been reported to be related to the coping behavior of mothers of children suffering from cancer, and religion has been considered as a defensive-protective system for adjusting to the crises caused by cancer. [37] In Iran, a relationship between life quality and negative RCOPE has been reported for the main caregiver in the family of children with physical disability. [38] Based on what has been said so far, the quality of life can be related to religion. Despite the fact that, religion is considered to be an important source of adjustment for caregivers, and some studies have been done on caregivers of children with cancer; no study so far has investigated the relationship between life quality and RCOPE in caregivers of children with recurrent leukemia in Islamic countries like Iran. This study was conducted to investigate the relationship between life quality and RCOPE in the mothers of children with recurrent leukemia. Methods The present study is a cross-sectional one of the descriptive-correlational type which was conducted from January 2016 to March 2017 in selected hospitals affiliated with the universities of medical sciences in Tehran (Hazrat-e Ali Asghar Hospital, The Children's Medical Center, Mofid Hospital, and Mahak Hospital). The population of the study included all the mothers of children aging 1-15 years with leukemia who had experienced the recurrence of the disease at least one time and who visited the blood clinics of the above-mentioned hospitals. Two hundred qualified mothers entered the study by assuming a correlation coefficient of 0.20 between life quality and RCOPE in mothers of children with leukemia who had experienced the recurrence of the disease at least one time, a confidence interval of 95%, and a test power of 80%. The sampling method was continuous in a way that the researcher visited the clinics of the hospitals mentioned above on different days of the week and selected the qualified mothers to enter the study as the research unit. The criteria for mothers to enter the study were: being able to read and write in Farsi and having a child aging 1-15 years suffering from leukemia with at least one experience of the recurrence of the disease, as confirmed by the Physician and stated by the mother. Mothers with psychological and neurological disorders or other identified chronic diseases were excluded from the study. Questionnaires about demographic information, Persian version of the Caregiver Quality of Life Index-Cancer (CQOLC-P), and RCOPE were used to gather the data. 
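The sample-size justification above (a correlation of 0.20, a 95% confidence level, and 80% power) is consistent with a standard calculation based on the Fisher z transformation. The exact formula used by the authors is not stated, so the sketch below is illustrative only.

```python
# Minimal sketch: sample size needed to detect r = 0.20 with alpha = 0.05 (two-sided)
# and power = 0.80, using the Fisher z approximation.
import math
from scipy.stats import norm

r, alpha, power = 0.20, 0.05, 0.80
C = 0.5 * math.log((1 + r) / (1 - r))                       # Fisher z transform of r
n = ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) / C) ** 2 + 3
print(math.ceil(n))   # about 194, consistent with enrolling 200 mothers
```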
The demographic information questionnaire included items on the age, education level, number of children, income level, and occupation of the mother, as well as the age, sex, time of disease onset, time of the first recurrence, number of recurrences, and types of treatments received for the child. The CQOLC questionnaire was developed by Weitzner et al. in 1997 in the United States, and its Cronbach's alpha was calculated to be 0.91 in a study conducted by Weitzner in 1999. [39,40] This questionnaire was translated into Farsi by Khanjari et al. and validated using a construct and content validity method. The Cronbach's alpha for the questionnaire was calculated to be 0.89 in Iran. The CQOLC-P questionnaire includes 35 items grouped into four areas: fourteen items in the area of mental and physical suffering, nine items on the breakdown of lifestyle, eight items on positive coping, and three items on economic worries, plus one item about willingness to cooperate in taking care of a family member with cancer. This last item was not categorized in any of the four areas, and its score was included only in the calculation of the total score. [41] The items were answered on a 5-point Likert scale, with points ranging from 0 to 4: strongly disagree (0), disagree (1), neither agree nor disagree (2), agree (3), and strongly agree (4). The highest possible score was 140, and a higher score indicates a better quality of life. [39] The RCOPE questionnaire [3] includes 14 questions, consisting of two dimensions, positive RCOPE (7 questions) and negative RCOPE (7 questions), and was developed by Pargament et al. [42] The reliability and validity of this questionnaire in Iran were assessed by Rouhani et al., and the Cronbach's alpha coefficient was reported to be 0.86 for the positive coping dimension and 0.87 for the negative coping dimension. [43] To observe ethical considerations, after obtaining the required approvals from the research committee of the university, written informed consent was obtained from all the mothers who participated in the study; they were assured that their information would be kept confidential and were informed that they could withdraw from the study at any time they wanted to. This study is recorded in the ethics committee of Iran University of Medical Sciences under the ethics code IR.IUMS.REC.1394.9211196240. The data were analyzed in SPSS version 16 (SPSS, Chicago, IL, USA) using descriptive statistics, the independent samples t-test, one-way analysis of variance, correlation coefficients, and multiple regression, and P < 0.05 was considered significant. Results The demographic information for the mothers and children investigated in the study The mean age of the investigated mothers was 34.38 ± 5.65 years (the youngest 22 and the oldest 55 years), and the mean age of the children with leukemia was 6.65 ± 3.82 years (the minimum age was 1 and the maximum age was 15). Of all the 200 children, 125 were boys (62.5%) and 75 were girls (37.5%). The average duration of leukemia in the children was 4.60 ± 3.39 years. One hundred and seventy-nine of the children (89.5%) had experienced at least one recurrence of the disease [Table 1].
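As a worked illustration of the CQOLC-P scoring described under Measures (35 items rated 0-4, four subscales plus one unassigned item, maximum total 140), the following sketch sums raw responses by subscale. Item ordering by subscale is assumed for illustration, and any reverse-scoring of negatively worded items, which is not described in this excerpt, is omitted.

```python
# Minimal sketch of CQOLC-P total and subscale scoring (0-4 Likert responses).
SUBSCALES = {
    "mental_physical_suffering": 14,
    "lifestyle_breakdown": 9,
    "positive_coping": 8,
    "economic_worries": 3,
}  # plus one item on willingness to provide care, counted only in the total (35 items)

def score_cqolc(responses):
    """responses: list of 35 integers in 0..4, assumed ordered by subscale as above."""
    assert len(responses) == 35 and all(0 <= r <= 4 for r in responses)
    scores, start = {}, 0
    for name, n_items in SUBSCALES.items():
        scores[name] = sum(responses[start:start + n_items])
        start += n_items
    scores["total"] = sum(responses)   # ranges from 0 to 140
    return scores

print(score_cqolc([2] * 35))
```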
The mean scores for quality of life and religious coping The mean total quality of life score for mothers of children with recurrent leukemia was 61.25 ± 14.98, and the scores for all aspects of quality of life, which included the areas of mental and physical suffering, the breakdown of lifestyle, and economic worries, were less than half of the highest possible score, except for the area of positive coping (20.85 ± 4.05) [Table 2]. The results of the study also showed that the score for positive RCOPE in the investigated mothers (21.87) was more than half of the highest possible score [Table 2]. Relationship between quality of life and demographic information of the investigated mothers and children The results of the study on the relationship between quality of life and the demographic information of mothers showed that education level (P < 0.05), income (P < 0.001), and occupation (P < 0.001) had a significant relationship with the general quality of life of mothers. However, there was no statistically significant relationship between the quality of life score and the other demographic information. Pairwise comparisons showed that quality of life in mothers with university education was significantly higher than in the other mothers in the study. Furthermore, quality of life in mothers with an adequate income was higher than in the other mothers. Housewives had a lower quality of life than employed mothers. The relationship between quality of life and religious coping The results of the study on the relationship between quality of life and RCOPE in mothers of children with leukemia showed that positive RCOPE was positively correlated only with the positive coping dimension of quality of life (P < 0.001), and negative RCOPE had a significant reverse correlation with general quality of life and all its aspects [Table 3]. The results of the linear regression showed significant relationships of income, occupation, and negative RCOPE with the quality of life of mothers. Housewives had a lower quality of life than employed mothers (B = 7.73). Furthermore, mothers with an adequate income level (B = 13.94) had a higher quality of life than those who said they did not have enough income. The results also showed that each one-unit increase in negative RCOPE was associated with a decrease of 1.536 in the quality of life score [Table 4]. Discussion The results of this study showed that the mean score for the general quality of life of mothers of children with recurrent leukemia was less than half of the total possible score, indicating that they do not live a very satisfactory life. The quality of life of the mothers investigated in this study was low in comparison with the quality of life reported for family caregivers (father, mother, brother, sister, and children) in other studies conducted in Iran, Turkey, and Taiwan. [44][45][46] The results also showed that the quality of life of mothers of children with recurrent leukemia was lower than that reported for mothers of children with leukemia. [19] One reason for this might be that taking care of a child with recurrent leukemia is extremely exhausting, and the hope of recovery from the disease decreases. Diagnosing the recurrence of the disease requires that the family be reorganized to be able to deal with their fear and their situation regarding starting a new treatment protocol. [10,11] Recurrence of the disease in children is an unpleasant experience for their families because the family, especially the mother, has to face the psychological-social effects of cancer, such as uncertainty, hopelessness, and worrying about the death of the child, again.
[10] The investigation of the different dimensions of quality of life in this study showed that economic worries and then mental and physical suffering scored the lowest. This finding can be an indicator of serious economic worries on the part of caregiver mothers and also of high levels of their suffering, which is more severe for mothers of children with recurrent leukemia than for mothers of children without recurrence. [47] This finding demonstrates the importance of full psychological and social support for these mothers so that they can take care of their children in a better way. The results also showed that the quality of life of employed mothers with an adequate income was higher than that of the other mothers, a finding related to decreased economic worries, which is an important aspect of quality of life and a strong predictor of a higher quality of life. A study in Taiwan reported a positive relationship between higher family income and a higher quality of life for parents of children suffering from cancer. [48] In a study in Iran, a significant relationship was reported between quality of life and the parents' income and occupational status. [19] Furthermore, in a study by Tang titled "the quality of life for the caregiver member of families with children suffering from cancer in Taiwan," the quality of life of caregiver mothers was significantly related to their level of university education. [46] However, a study by Litzelman on the quality of life of parents of children suffering from leukemia and brain tumors showed that higher levels of education in parents were accompanied by lower levels of quality of life, because parents with higher levels of education prefer to actively participate in the process of making decisions about the treatment of their children, and this increases their stress, which leaves a negative effect on their quality of life. [49] The correlation between the scores for quality of life and negative RCOPE showed that mothers of children with recurrent leukemia deal with tensions and stresses through negative religious behaviors. This negative RCOPE can be related to their lower quality of life, and the results of the linear regression showed that negative coping can be a predictor of the quality of life of mothers of children with recurrent leukemia. The results of our study are in line with the results found by Ljungman et al. in mothers of children with cancer. [37] The results from Vallurupalli et al. on patients suffering from advanced cancer [50] and Tarakeshwar et al. on patients suffering from advanced cancer [51] showed that greater use of negative RCOPE results in a lower quality of life. The results of this study showed a correlation between the scores for quality of life and positive RCOPE, indicating that positive RCOPE is associated with increased quality of life only in terms of the positive coping dimension. In positive RCOPE, the person deals with negative incidents in his or her life (such as having a child with cancer) through positive changes attributed to help from God and tries to follow a purposeful life with help from God and to improve his or her relationship with God through religious institutions.
[52] Furthermore, a study by Ljungman (2014) showed a very strong relationship between religion and the coping behavior of mothers of children with cancer, and religion is considered to be a defensive-protective system for coping with the crises associated with having cancer. [37] In another study, the researchers arrived at the conclusion that spirituality increases the capacity for acceptance and decreases hopelessness in patients. [22] However, the results of a study by Ursaru et al. (2014) on patients suffering from breast cancer showed that using RCOPE mechanisms was not correlated with quality of life. [53] The discrepancies between those results and the results of the present study can be attributed to cultural differences and also to the populations investigated; the above-mentioned study was conducted on patients suffering from cancer themselves, whereas the present study was conducted on the mothers of patients with cancer. The results of this study demonstrated the importance of the effects of RCOPE on the quality of life of mothers of children with recurrent leukemia. Positive RCOPE is a predictor of a higher quality of life, and negative RCOPE is a predictor of a lower quality of life. In this study, negative RCOPE was a stronger predictor of quality of life than positive RCOPE. One of the major limitations of this study was the high level of distress experienced by the parents, which could have affected their responses to the questions. Another limitation is the cross-sectional nature of the study. Given that the investigated sample in this study consisted of mothers of children with recurrent leukemia, the findings may not be generalizable to other caregiver groups. Conclusion The results showed that the quality of life of mothers of children with recurrent leukemia was significantly related to RCOPE, in such a way that mothers with negative RCOPE were exposed to the risk of having a less-than-average quality of life. Religious support can help improve their quality of life. It is essential for doctors and nurses to take religious considerations into account as one of the most important aspects of treatment and care for improving the quality of life of caregivers of patients with leukemia. More longitudinal studies are needed to investigate the effects of establishing support communities. Acknowledgment This article was part of a master's thesis in nursing at Iran University of Medical Sciences approved by the research committee of the university. Finally, I would like to extend my sincere thanks to the nursing staff of Hazrat-e Ali Asghar, Mofid, and Mahak Hospitals and the Children's Medical Center, and to all the mothers who participated in this study for their kind cooperation. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
53BP2S, Interacting with Insulin Receptor Substrates, Modulates Insulin Signaling* It is well known that insulin receptor substrates (IRS) act as a mediator for signal transduction of insulin, insulin-like growth factors, and several cytokines. To identify proteins that interact with IRS and modulate IRS-mediated signals, we performed yeast two-hybrid screening with IRS-1 as bait. Out of 109 cDNA-positive clones identified from a human placental cDNA library, two clones encoded 53BP2, p53-binding protein 2 (53BP2S), a short form splicing variant of the apoptosis-stimulating protein of p53 that possesses Src homology region 3 domain, and ankyrin repeats domain, and had been reported to interact with p53, Bcl-2, and NF-κB. Interaction of 53BP2S with IRS-1 was confirmed by glutathione S-transferase pull-down and co-immunoprecipitation assays in COS-7 cells and 3T3-L1 adipocytes. The Src homology region 3 domain and ankyrin repeats domain of 53BP2S were responsible for its interaction with IRS-1, whereas the phosphotyrosine binding domain and a central domain (amino acid residues 750-861) of IRS-1 were required for its interaction with 53BP2S. In CHO-C400 cells, expression of 53BP2S reduced insulin-stimulated IRS-1 tyrosine phosphorylation with a concomitant enhancement of IRS-2 tyrosine phosphorylation. In addition, the amount of the phosphatidylinositol 3-kinase regulatory p85 subunit associated with tyrosine-phosphorylated proteins, and activation of Akt was enhanced by 53BP2S expression. Although 53BP2S also enhanced Akt activation in 3T3-L1 adipocytes, insulin-induced glucose transporter 4 translocation was markedly inhibited in accordance with reduction of insulin-induced AS160 phosphorylation. Together these data demonstrate that 53BP2S interacts and modulates the insulin signals mediated by IRSs. It is well established that insulin and IGFs 3 display a variety of bioactivities, including induction of growth promotion, differentiation, and metabolic function. Insulin or IGFs bind to the extracellular subunits of their respective receptors and induce intramolecular conformational changes resulting in the activation of the ␤ subunit intrinsic tyrosine kinase activity (1,2). Activated receptor kinases phosphorylate several intracellular substrates, including insulin receptor substrates (IRSs), Shc Cbl, or Crk. Tyrosine phosphorylation of these substrates leads to their binding to several intermediate signaling molecules containing SH2 domains. In particular, the binding of the p85 regulatory subunit of PI 3-kinase to IRSs results in the recruitment and activation of the p110 catalytic subunit (3,4). The interaction of Grb2 with tyrosine-phosphorylated IRS and Shc leads to activation of the small GTP-binding protein Ras through the plasma membrane recruitment of the guanyl nucleotide exchange factor SOS (3,5,6). These interactions result in activation of PI 3-kinase cascade and Ras-MAPK cascade, respectively (7,8). It has become clear that activation of these two cascades play important roles in a variety of insulin or IGF actions. Insulin is well known to induce translocation of glucose transporter 4 (Glut4), which is expressed in muscle and adipose tissue, from multiple intracellular compartments to the plasma membrane, leading to an enhancement of glucose uptake. 
It is well established that activation of PI 3-kinase, generation of PI 3,4,5-trisphosphate, activation of the downstream effector Akt, and subsequent phosphorylation of the Akt substrate AS160 are necessary events required for the insulin stimulation of Glut4 translocation and glucose uptake (9-17). Thus, the IRS family proteins play essential roles as intermediate mediators for this signal transduction pathway central to the biological actions of both insulin and IGF-I receptors. Four members of the IRS family (IRS1-4) have been identified to date (8). These IRS family proteins share two highly homologous amino-terminal regions, the pleckstrin homology (PH) domain and the phosphotyrosine binding (PTB) domain, which play important roles in the interaction with receptor tyrosine kinases. However, the carboxyl-terminal region is not conserved except for the tyrosine residues possibly phosphorylated by the receptor tyrosine kinases (8). In addition to tyrosine phosphorylation sites, there are multiple putative serine/threonine (Ser/Thr) phosphorylation sites in the carboxyl-terminal region, several of which are reported to be involved in the modulation of insulin-induced IRS tyrosine phosphorylation and to play important roles in the modulation of insulin or IGF signals (18-24). For example, in 3T3-L1 adipocytes, pretreatment with tumor necrosis factor-α reduced insulin-induced glucose uptake through an impairment of insulin-stimulated IRS-1 tyrosine phosphorylation (25-27). In contrast, we found that in rat FRTL-5 thyroid cells, chronic pretreatment with thyrotropin markedly potentiated DNA synthesis in response to IGF-I (28, 29). Detailed analyses showed that thyrotropin pretreatment enhanced IGF-I-induced IRS-2 tyrosine phosphorylation, resulting in the augmentation of IGF-I signals (30, 31). Nevertheless, in both experimental model systems, in vitro phosphorylation assays demonstrated that Ser/Thr phosphorylation and the association of some proteins with IRS are related to alterations of IRS tyrosine phosphorylation (26). Thus, the identification of IRS-associated proteins is an essential prerequisite for understanding the alteration of IRS tyrosine phosphorylation. Accordingly, this study was undertaken to isolate proteins that interact with IRSs and modulate IRS-mediated signals. By using yeast two-hybrid screening, we cloned a cDNA that encodes a protein known as 53BP2S, a short form splicing variant of ASPP2. Our data indicate that 53BP2S plays an important regulatory role by modulating the relative extent of IRS-1 versus IRS-2 tyrosine phosphorylation and the subsequent downstream signals mediating insulin-stimulated glucose transport. EXPERIMENTAL PROCEDURES Materials-Dulbecco's modified Eagle's medium (DMEM), phosphate-buffered saline (PBS), and Hanks' buffered saline solution were purchased from Nissui (Tokyo, Japan).
Calf serum and fetal bovine serum were obtained from JRH Bioscience (Tokyo, Japan). Penicillin and streptomycin were obtained from Ban'yu Pharmaceutical Co. (Tokyo, Japan). Dr. Takaaki Aoyagi (Institute of Microbial Chemistry, Tokyo, Japan) generously provided leupeptin and pepstatin. The phosphotyrosine monoclonal antibodies PY20 and 4G10 were from Sigma and ICN (Irvine, CA). Polyclonal IRS-1 and IRS-2 antibodies were prepared by immunizing rabbits with synthetic peptides as reported previously (30). GFP polyclonal antibody was purchased from BD Biosciences. The FLAG M2 monoclonal antibody and HA antibody were from Sigma. Phospho-Akt-specific antibody (Ser-473) and phospho-ERK-specific antibody were from Cell Signaling (Beverly, MA). The 53BP2 antibody was purchased from BD Biosciences. The monoclonal ASPP2 antibody clone DX54.10 was from Sigma. The Myc and AS160 antibodies were purchased from Upstate (Charlottesville, VA). Phospho-AS160 (Thr-642)-specific antibody was from BioSource (Camarillo, CA). LY294002 was obtained from Sigma. Alexa Fluor 596 anti-mouse IgG and Alexa Fluor 488 anti-mouse IgG were purchased from Molecular Probes (Eugene, OR). All dishes, plates, and flasks were obtained from IWAKI (Tokyo, Japan). Other chemicals were of the reagent grade and available commercially. Plasmid Construction-We obtained pBS-Bbp, containing a short form splicing variant of ASPP2, and 53BP2S (residues 123-1128 of ASPP2), a kind gift from Dr. Louie Naumovski (Stanford University School of Medicine, Stanford, CA) (33). IRS-1 cDNA was a kind gift from Dr. Takashi Kadowaki (Graduate School of Medicine, the University of Tokyo, Tokyo, Japan). IRS-1 cDNA containing full-length open reading frame was amplified by PCR using two primers, 5Ј-GGGGCATATG-GCGAGCCCTCCGGATA-3Ј and T7 primer. The PCR product was digested by NdeI and BamHI and was cloned into the NdeI-BamHI site of the pAS2-1 vector (BD Biosciences). The resulting plasmid was named pAS-IRS-1 and used for two-hybrid screening as bait. pGEX vectors were used for expression of fusion proteins with GST in Escherichia coli. By digesting pACT-53BP2S, the EcoRI-PstI, PstI-EcoRI, or EcoRI-EcoRI fragment, which encodes only ankyrin repeats, only the SH3 domain, or both domains, was cloned into the pGEX vector in-frame, yielding pGEX-ANK, pGEX-SH3, or pGEX-53BP2S, respectively. These plasmids were used for expression and purification of fusion proteins with GST in E. coli. pACT-ANK and pACT-SH3 were constructed in pACT2 vector (BD Biosciences) by the same way as pGEX-ANK and pGEX-SH3. pIRS-3 and pGFP-IRS-4 were constructed as described before (34). pIRS-2 was constructed as follows. Briefly, EcoRI fragment containing the full length of IRS-2 was cloned into pcDNA3. pFLAG-IRS-1, which expresses FLAG-tagged IRS-1, was constructed as follows. Full length of IRS-1 open reading frame was amplified by PCR using two primers, 5Ј-CCCCGATATCAAC-TATGGCGAGCCCTCCG-3Ј and T3 primers. The EcoRV-BamHI fragments of the PCR product was cloned into pCMV-FLAG-2 vector in-frame. Convenient restriction enzymes were used to construct the plasmids expressing some deletion mutants of IRS-1 fused with GFP. pGFP-D13 or pGFP-D14, which is a plasmid expressing IRS-1 D13 or IRS-1 D14 deletion mutant fused with GFP, respectively, was constructed as follows. IRS-1 fragment encoding the amino acid residues 660 -861 or 750 -861 was amplified by PCR using two primers, 5Ј-AGATGAAAGCTTCCAGTGG-3Ј and T3 primer or 5Ј-CCAGAAGCTTCCCAGCACAAGCC-3Ј and T3 primer. 
Amplified fragments were digested by HindIII and BamHI and cloned into pEGFP-C1 in-frame. BamHI fragment containing 53BP2S, a short form splicing variant of ASPP2 from the pBS-Bbp, was cloned into pEGFP-C1 or pCMV-FLAG-2 to express GFP-tagged or FLAG-tagged 53BP2S, respectively, in mammalian cells. pHA-Akt2 or pFLAG-AS160 was a kind gift from Dr. Morris Birnbaum (University of Pennsylvania). Yeast Two-hybrid Screening-pAS-IRS-1, which expresses the full length of IRS-1 fused with Gal4-DNA binding domain in yeast, was used as bait. cDNA library expressing human placental cDNA fused with Gal4 activation domain was obtained from BD Biosciences. Yeast strain CG1945 was used for the library screening. CG1945 cells were transformed with pAS-IRS-1 and human placental cDNA library by a lithium method. Transformants that could grow in the medium lacking leucine, tryptophan, and histidine and containing 0.5 mM 3-aminotriazole were isolated. Only the colonies that turned blue in the ␤-galactosidase assay (described below) were selected. Plasmids from the candidate transformants were recovered from them into E. coli and further studied. ␤-Galactosidase Assay-Yeast transformants were grown for 2 days on the nylon filter in the medium lacking leucine and tryptophan. The colonies on the filter were frozen by liquid nitrogen and incubated for 30 min at 30°C in Z buffer (38.6 mM ␤-mercaptoethanol, 1 mg/ml 5-bromo-4-chloro-3indolyl-␤-D-galactopyranoside (X-gal), 60 mM Na 2 HPO 4 , 40 mM NaH 2 PO 4 , 10 mM KCl, 1 mM MgSO 4 ). When colonies turned blue, these gene products were scored as having a positive interaction. Transient Transfection of COS-7, CHO-C400, and 3T3-L1 Adipocytes-The expression plasmids were transfected into COS-7 cells using the DEAE-dextran procedure. Briefly, cells were grown to be subconfluent on 100-mm dishes. The medium was then changed to 4 ml of transfection buffer (DMEM containing 10 g of DNA, 0.3 mg/ml DEAE-dextran, 50 mM Tris-HCl, pH 7.4). After 4 h of incubation, the medium was aspirated, and the cells were treated for 2 min with 4 ml of glycerol buffer (DMEM supplemented with 10% glycerol, 50 mM Tris-HCl, pH 7.4). Subsequently, the cells were washed three times with Hanks' buffered salt solution and cultured in 6 ml of growing medium for further 44 h. Then the cells were used for pulldown assay. CHO-C400 cells were transfected with expression plasmids by the calcium phosphate precipitation method. Briefly, cells were grown to be subconfluent on 100-mm dishes. One ml of DNA solution (10 g/ml plasmid DNA, 50 mM Hepes, pH 7.05, 0.7 mM Na 2 PO 4 , 125 mM CaCl 2 ) was incubated at room temperature for 30 min and added to the cultured dishes. Four hours later, medium was changed by fresh medium, and cells were cultured for additional 2 days and then used for some experiments. Transient transfection of 3T3-L1 adipocytes was described previously (35). Briefly, fully differentiated 3T3-L1 adipocytes were detached from the tissue culture plates by trypsin buffer (0.25% trypsin, 0.02% EDTA in PBS), and the cells were collected by centrifugation and then washed twice with PBS. The cells were resuspended in 0.5 ml of PBS and electroporated at 1.5 mV and 0.95 mA (GenePulser II, Bio-Rad). DMEM containing 10% fetal bovine serum was added to the electroporated cells; the cells were then allowed to adhere to tissue culture dishes for 24 h, and the adipocytes were then serum-starved for 2 h before experiments. In some experiments, the electroporated adipocytes were seeded on coverslips. 
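As a minimal illustration of the two-step selection logic described under "Yeast Two-hybrid Screening" and "β-Galactosidase Assay" above (growth on medium lacking leucine, tryptophan, and histidine containing 3-aminotriazole, followed by blue-colony scoring with X-gal), the following Python sketch filters hypothetical colony records. The record fields and example entries are illustrative assumptions, not data from the screen.

```python
from dataclasses import dataclass

@dataclass
class Colony:
    clone_id: str
    grows_on_triple_dropout_3at: bool  # growth selection (-Leu/-Trp/-His, 0.5 mM 3-AT)
    beta_gal_blue: bool                # lacZ reporter: colony turns blue with X-gal

def is_positive(colony: Colony) -> bool:
    """A clone is scored as an IRS-1 interactor only if it passes both reporters."""
    return colony.grows_on_triple_dropout_3at and colony.beta_gal_blue

# Hypothetical example records (not data from the study itself)
colonies = [
    Colony("clone_001", True, True),    # candidate interactor
    Colony("clone_002", True, False),   # grows but stays white in the X-gal assay -> discarded
    Colony("clone_003", False, False),  # fails the growth selection -> discarded
]

positives = [c.clone_id for c in colonies if is_positive(c)]
print(positives)  # ['clone_001']
```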
Purification of GST Fusion Proteins-pGEX plasmid was transformed into E. coli BL21 (DE3) pLysS. Isopropyl ␤-D-thiogalactopyranoside was added to 1 mM in the final concentration, and the expression of GST fusion protein was induced overnight at 26°C. Cells were harvested and resuspended in PBS with 1% Triton X-100 and lysed by sonication three times for 30 s on ice. The lysates were centrifuged, and supernatant was added to the glutathione-Sepharose column pre-equilibrated in the PBS buffer. The column was washed with PBS three times, and the GST fusion proteins were eluted by elution buffer (50 mM Tris-HCl, pH 8.0, 10 mM reduced glutathione). Eluted solution was fractionated by 1 ml. Each fraction was subjected to protein assay using protein assay kit (Bio-Rad). The most concentrated fraction was used for experiments. GST Pulldown Assay-COS-7 cells were transfected with expression plasmids as described above. Two days after transfection, cells were harvested by cold lysis buffer (50 mM Tris-HCl, pH 8.0, 1 mM EDTA, 150 mM NaCl, 1% Nonidet P-40, 100 kallikrein-inactivating units/ml aprotinin, 20 mg/ml phenylmethanesulfonyl fluoride, 10 mg/ml leupeptin, 5 mg/ml pepstatin). The lysates were centrifuged at 14,000 ϫ g for 20 min at 4°C. The supernatant was subjected to protein assay using the protein assay kit. Cell lysates (1 mg of protein) were incubated with 100 pmol of purified GST, GST-53BP2S, GST-ANK, or GST-SH3 fusion protein at 4°C for 2.5 h. Forty l of glutathione-Sepharose beads (50% (v/v)) was then added, and incubation was continued for additional 2.5 h. Sepharose beads were collected by centrifuge and washed three times with washing buffer containing 50 mM Tris-HCl, 1 mM EDTA, and 0.1% Triton X-100. Bound proteins were subjected to SDS-PAGE, transferred to nylon membrane, and immunoblotted with the indicated antibody. Analyses of Insulin Signaling in CHO-C400-CHO-C400 cells transfected with pEGFP-53BP2S were grown to confluency, and the quiescent cells were stimulated with insulin (100 nM) for indicated times. Cell extracts were prepared in lysis buffer, and 1 mg of total lysate protein was used for immunoprecipitation with IRS-1, IRS-2, or 4G10 antibody. Precipitants were separated by 8% SDS-PAGE and immunoblotted with 4G10 or p85 antibody. One hundred g of total cell lysates were separated by 12% SDS-PAGE and immunoblotted with phos-pho-Akt-specific antibody (Ser-473) or with phospho-ERK antibodies. Immunofluorescence Analysis-Electroporated 3T3-L1 adipocytes were washed once with PBS and fixed/permeabilized with a solution containing 3.7% formaldehyde and 0.2% Triton X-100 in PBS for 10 min at room temperature. Cells were then washed with PBS and incubated with blocking buffer (1% bovine serum albumin and 5% donkey serum in PBS) for 1 h at room temperature, and primary antibodies (1:200 for anti-Myc, 1:100 for phospho-Akt-specific antibody (Ser-473), or 1:100 for FLAG antibody) were added for 1 h at room temperature. The samples were again washed with PBS, incubated with a secondary antibody conjugated to Texas Red (1:100 dilution), Alexa Fluor 488 (1:1000 dilution), or Alexa Fluor 596 (1:1000 dilution) for 40 min and washed, and the coverslips were mounted on Vectashield for visualization using a Zeiss LSM510 confocal microscope. 
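The densitometric normalization used for the CHO-C400 immunoblot experiments (band intensities expressed relative to the insulin-stimulated GFP-transfected control, as described later in the Figure 4 legend) can be sketched as follows. This is a minimal sketch assuming a simple ratio-to-reference normalization; the intensity values are placeholders.

```python
# Minimal sketch of immunoblot densitometry normalization, assuming each condition's
# band intensity is expressed relative to the insulin-stimulated GFP-transfected control.
raw_intensity = {
    # (construct, stimulation): arbitrary densitometry units (hypothetical values)
    ("GFP", "basal"): 120.0,
    ("GFP", "insulin"): 1000.0,          # reference condition
    ("GFP-53BP2S", "basal"): 110.0,
    ("GFP-53BP2S", "insulin"): 620.0,
}

reference = raw_intensity[("GFP", "insulin")]
relative = {condition: value / reference for condition, value in raw_intensity.items()}

for condition, value in sorted(relative.items()):
    print(condition, round(value, 2))
```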
Assay of the Glut4 Translocation to the Plasma Membrane-Fully differentiated 3T3-L1 adipocytes were transfected with pEGFP vector or pEGFP-53BP2S along with pGlut4-myc by electroporation. Twenty-four hours later, cells were serum-starved for 2 h followed by stimulation with or without insulin (100 nM) for 20 min. Cells were fixed, permeabilized, and incubated with a Myc antibody. The ratio of cells displaying Glut4-myc plasma membrane fluorescence was determined by counting 50 cells that were co-expressing both the GFP constructs and Glut4-myc in three independent experiments. Quantification of Glut4 translocation was determined as follows. Fully differentiated 3T3-L1 adipocytes were transfected with 200 µg of pFLAG vector or 200 µg of pFLAG-53BP2S along with 50 µg of pGlut4-myc-eGFP by electroporation. Cells were serum-starved for 2 h followed by stimulation with or without insulin (100 nM) for 20 min. Cells were fixed without permeabilization and incubated with a Myc antibody. The ratio of Glut4 translocation was determined by comparison of the total Myc fluorescence intensity with the total GFP intensity. [Figure 1 legend, continued: proteins precipitated by glutathione-Sepharose beads were subjected to immunoblotting (IB) with the indicated antibodies; input represents an aliquot corresponding to 1% of the lysates used in each binding reaction; blots are representative of three independent experiments. D, the interaction of 53BP2S mutants with IRS-1 was analyzed by pulldown assay: cell lysates of COS-7 cells expressing FLAG-tagged IRS-1 were incubated with purified GST, GST-53BP2S, GST-ANK, or GST-SH3, and proteins precipitated by glutathione-Sepharose beads were immunoblotted with anti-FLAG antibody; input represents an aliquot corresponding to 3% of the lysates used in each binding reaction; blots are representative of three independent experiments.] Statistical Analysis-Results are expressed as means ± S.E. For comparison, the data were analyzed by analysis of variance followed by Student's t test, and the difference was considered significant at p < 0.05. RESULTS Identification of Proteins That Interact with Insulin Receptor Substrate-1-To identify proteins that interact with IRS-1, the full-length rat IRS-1 cDNA was fused with the Gal4 DNA binding domain in the pAS2-1 vector and used as bait for a two-hybrid screening. We screened a human placental cDNA library fused with the Gal4 activation domain in pACT2, and from 1 × 10⁶ clones, 109 IRS-1-interacting candidates were identified. Among them, 33 clones included cDNAs encoding 14-3-3 isoforms (β, ε, and a third isoform), which were previously shown to interact with IRS-1 or IRS-2 (36). In addition to the various clones identified, two contained the same insert, a region of the ASPP2 cDNA sequence corresponding to amino acid residues 881–1128. ASPP2 is an 1128-amino acid protein that contains four ankyrin repeats and an Src homology 3 (SH3) domain in the carboxyl-terminal region (Fig. 1A). ASPP2 was originally isolated as 53BP2 or the Bcl-2 binding protein Bbp (33,37). It was later reported that 53BP2 is a 1005-amino acid protein and that ASPP2 contains an additional 123 amino acids at the amino terminus of 53BP2. Because 53BP2 is an alternative splicing variant of ASPP2, it was proposed to call this splicing variant 53BP2S (38). In this study, we refer to this ASPP2 splicing variant as 53BP2S.
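A minimal sketch of the quantification and statistics described under "Assay of the Glut4 Translocation to the Plasma Membrane" and "Statistical Analysis" above: per-experiment percentages of cells with plasma membrane Glut4-myc staining are compared by one-way analysis of variance followed by Student's t test, with significance taken at p < 0.05. The numeric values are hypothetical placeholders, and SciPy is assumed to be available.

```python
from scipy import stats

# Hypothetical percentages of cells showing plasma membrane Glut4-myc staining
# (three independent experiments per condition; values are placeholders).
pct_membrane = {
    "GFP + insulin":        [55.0, 60.5, 59.1],
    "GFP-53BP2S + insulin": [22.0, 25.5, 24.2],
    "GFP basal":            [15.0, 17.5, 17.6],
}

# One-way ANOVA across all groups, as described under "Statistical Analysis".
f_stat, p_anova = stats.f_oneway(*pct_membrane.values())

# Follow-up Student's t test for the comparison of interest.
t_stat, p_t = stats.ttest_ind(pct_membrane["GFP + insulin"],
                              pct_membrane["GFP-53BP2S + insulin"])

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"t test (GFP vs GFP-53BP2S, insulin): p = {p_t:.4f}, significant = {p_t < 0.05}")
```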
53BP2S Interacts with IRS Family Proteins in Vitro-The clones isolated in the two-hybrid screen contained the carboxyl-terminal region that includes the SH3 domain and ankyrin repeats. To confirm the interaction of 53BP2S with IRS-1 detected in the two-hybrid assay, 53BP2S was tested in a GST pulldown assay. The isolated 53BP2S cDNA (amino acid residues 758–1005) fused with GST (GST-53BP2S) was constructed in a pGEX vector. GST-53BP2S protein or GST alone was then expressed and purified from E. coli (Fig. 1B). Cell lysates isolated from COS-7 cells expressing full-length FLAG-tagged IRS-1 (FLAG-IRS-1) were incubated with purified GST-53BP2S or GST only. Protein complexes precipitated with glutathione-Sepharose beads were separated by SDS-PAGE, and FLAG-IRS-1 bound to GST-53BP2S was detected by immunoblotting with the FLAG antibody. FLAG-IRS-1 associated with GST-53BP2S but not with GST alone (Fig. 1C), indicating that 53BP2S specifically associates with IRS-1 in vitro as well as in the yeast two-hybrid assay. Four members of the IRS family, IRS-1, IRS-2, IRS-3, and IRS-4, have been identified to date. Among them, the PH and PTB domains are highly conserved, but other regions are divergent except for the tyrosine residues possibly phosphorylated by receptor tyrosine kinases (2). IRS-2, IRS-3, and GFP-tagged IRS-4 could also interact with 53BP2S (Fig. 1C). Interaction of IRS-1 with 53BP2S Requires Both Ankyrin Repeats and SH3 Domain of 53BP2S-To investigate which region of 53BP2S is required for the interaction with IRS-1, two-hybrid β-galactosidase and GST pulldown assays were carried out. pACT-ANK or pACT-SH3 was constructed to contain only the ankyrin repeats or only the SH3 domain in the pACT2 vector, respectively (Fig. 1A). In the two-hybrid system, neither an interaction between IRS-1 and the ankyrin repeats nor one between IRS-1 and the SH3 domain was detectable by the β-galactosidase assay (Table 1). Similarly, we examined the precipitation of IRS-1 using ankyrin repeats (GST-ANK) and SH3 domain (GST-SH3) fusion proteins, respectively (Fig. 1B). Neither GST-ANK nor GST-SH3 could pull down FLAG-tagged IRS-1 (Fig. 1D), indicating that both the ankyrin repeats and the SH3 domain are required for the interaction with IRS-1. Consistent with these data, the interaction of 53BP2S with p53 and Bcl-2 also requires both the ankyrin repeats and the SH3 domain (33,37,39). [FIGURE 2. Interaction of 53BP2S with IRS-1 deletion mutants in the GST pulldown assay. The schematic structure of the IRS-1 protein is shown; two boxes represent the PH and PTB domains, respectively, and boxes above the IRS-1 structure indicate the 53BP2S binding domains. Below the IRS-1 structure, the deletion-mutant constructs (D1–D14) are shown; each deletion mutant was fused with GFP at the amino-terminal end. Plasmids expressing each deletion mutant of IRS-1 (D1–D14) were transfected into COS-7 cells. Cell lysates of COS-7 cells expressing each deletion mutant were incubated with purified GST or GST-53BP2S, and proteins precipitated by glutathione-Sepharose beads were subjected to immunoblotting with anti-GFP antibody. The results of each pulldown assay are shown on the right side of the deletion constructs. Input represents an aliquot corresponding to 1% of the lysates used in each binding reaction.]
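The domain-requirement reasoning from the two-hybrid and pulldown experiments above (Table 1 and Fig. 1, C and D) can be captured in a small data structure; the binding outcomes restate the results text, and the representation itself is only an illustrative sketch.

```python
# Binding outcomes reported in the text for IRS-1 against 53BP2S fragments.
# True = interaction detected, False = no interaction detected.
pulldown_of_irs1 = {
    "GST-53BP2S (ankyrin repeats + SH3)": True,
    "GST-ANK (ankyrin repeats only)":     False,
    "GST-SH3 (SH3 domain only)":          False,
    "GST alone":                          False,
}

# Both sub-domains fail on their own, whereas the fragment containing both binds,
# which is the basis for concluding that the two domains are jointly required.
both_required = (
    pulldown_of_irs1["GST-53BP2S (ankyrin repeats + SH3)"]
    and not pulldown_of_irs1["GST-ANK (ankyrin repeats only)"]
    and not pulldown_of_irs1["GST-SH3 (SH3 domain only)"]
)
print("Ankyrin repeats and SH3 domain jointly required:", both_required)  # True
```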
53BP2S Interacts with the PTB Domain and a Central Region (Amino Acid Residues 750–861) of IRS-1-To identify regions of IRS-1 that are required for the interaction with 53BP2S, we examined IRS-1 deletion mutants fused with GFP (D1–D14), and pulldown assays were performed for each mutant (Fig. 2). An IRS-1 mutant that contains only the PH domain (D8) could not, whereas a mutant that contains the PTB domain (D9) could, interact with GST-53BP2S (amino acid residues 758–1005). These data indicate that the IRS-1 PTB domain, but not the PH domain, is sufficient for the interaction with 53BP2S. Surprisingly, a mutant in which the PH and PTB domains were both deleted (D5) still interacted with 53BP2S. More detailed analyses identified another 53BP2S binding domain, a central region containing 112 amino acid residues (750–861), that was sufficient for the interaction (Fig. 2). The cross-reactivity of the four IRS proteins (Fig. 1C) probably results from the interaction of 53BP2S with the PTB domains, as these domains are highly homologous between the isoforms. Endogenously Expressed 53BP2S Interacts with IRS-1 in 3T3-L1 Adipocytes-We next investigated the interaction of endogenously expressed 53BP2S with IRS-1. To confirm the expression of 53BP2S in 3T3-L1 cells, RT-PCR was carried out using total cellular RNA from 3T3-L1 preadipocytes. A 53BP2S cDNA fragment was not amplified from first strand cDNA synthesized without SuperScript2 (RT−), whereas it was amplified when PCR was carried out using first strand cDNA synthesized with SuperScript2 (RT+), indicating that 53BP2S mRNA was expressed in 3T3-L1 preadipocytes (Fig. 3A). RT-PCR also confirmed the expression of 53BP2S in differentiated 3T3-L1 adipocytes. [FIGURE 3 (IRS-1 and 53BP2S). A, left panel, total RNA was extracted from 3T3-L1 preadipocytes. Using this total RNA, first strand cDNA was synthesized with (RT+) or without (RT−) SuperScript2, and PCR was carried out using these first strand cDNAs as templates. PCR conditions were as follows: 30 cycles of sequential incubations at 94°C for 30 s, 50 or 55°C for 60 s, and 72°C for 60 s, followed by a final extension at 72°C for 5 min. Right panel, total RNA was extracted from 3T3-L1 cells at differentiation days 0, 2, 4, 6, and 8. PCR conditions were 25 cycles of sequential incubation at 94°C for 30 s, 55°C for 60 s, and 72°C for 60 s for both 53BP2S and 36B4. These are representative data from five independent experiments. B, cell lysates of fully differentiated 3T3-L1 adipocytes were immunoprecipitated (IP) with IRS-1 antibody or preimmune serum. Total cell lysates of HEK293 cells or 3T3-L1 adipocytes and the IRS-1 immunoprecipitates were immunoblotted (IB) with anti-ASPP2 antibody (Sigma) or anti-IRS-1 antibody. Input represents an aliquot corresponding to 1% of the lysates used in the immunoprecipitation assay. C, cell lysates of quiescent 3T3-L1 adipocytes or adipocytes stimulated with insulin (100 nM) for 2 min were incubated with purified GST or GST-53BP2S. Proteins precipitated by glutathione-Sepharose beads were detected with anti-IRS-1 antibody. Input represents an aliquot corresponding to 10% of the lysates used in each binding reaction. These are representative immunoblots from three independent experiments. D, pEGFP or pEGFP-53BP2S was transiently transfected into 3T3-L1 adipocytes by electroporation. Cells were serum-starved for 2 h followed by stimulation with or without insulin for 2 min. Cell lysates were immunoprecipitated with IRS-1 antibody or preimmune serum and immunoblotted with anti-53BP2 antibody.]
The expression of 53BP2S increased with the differentiation of 3T3-L1 adipocytes, suggesting that 53BP2S plays an important role in differentiated 3T3-L1 cells (Fig. 3A). Immunoblotting of 3T3-L1 adipocyte extracts demonstrated the presence of a specific protein band that migrated identically to human ASPP2 expressed in human embryonic kidney HEK293 cells (Fig. 3B). More importantly, immunoprecipitation of endogenous IRS-1 resulted in the co-immunoprecipitation of endogenous 53BP2S in 3T3-L1 adipocytes (Fig. 3B). These data demonstrate that 53BP2S and IRS-1 directly interact in vivo. 53BP2S Interacts with IRS-1 Not through Recognition of Phosphotyrosine Residues-Because yeast has few tyrosine kinases, the IRS-1 expressed in yeast is not tyrosine-phosphorylated. To investigate whether the interaction of IRS-1 with 53BP2S could be modulated by IRS-1 tyrosine phosphorylation, precipitation assays using phosphorylated or nonphosphorylated IRS-1 were carried out. Briefly, cell lysates of quiescent or insulin-stimulated 3T3-L1 adipocytes were incubated with GST-53BP2S (amino acid residues 758–1005) or GST alone. Endogenous IRS-1 protein bound to GST-53BP2S was visualized by immunoblotting with anti-IRS-1 antibody. As expected, IRS-1 proteins derived from quiescent 3T3-L1 adipocytes were precipitated by GST-53BP2S (Fig. 3C). IRS-1 proteins precipitated from lysates of either quiescent or insulin-stimulated cells did not display any immunoreactivity against a phosphotyrosine-specific antibody. In addition, immunoblotting with an IRS-1 antibody showed that the amount of IRS-1 co-precipitated by 53BP2S was markedly decreased following insulin stimulation (Fig. 3C). Taken together, these data indicate that formation of the 53BP2S-IRS-1 complex is inhibited by insulin stimulation, most likely because of IRS-1 tyrosine phosphorylation. To evaluate the mechanism of interaction between IRS-1 and exogenously expressed 53BP2S in adipocytes, fully differentiated 3T3-L1 adipocytes were electroporated with the pEGFP or pEGFP-53BP2S expression vectors. Electroporated cells were starved for 2 h, and cell lysates were immunoprecipitated with an IRS-1 antibody. As observed in the GST pulldown assays, 53BP2S was co-immunoprecipitated with IRS-1 (Fig. 3D). In addition, insulin stimulation resulted in a decreased amount of 53BP2S co-immunoprecipitating with IRS-1 (Fig. 3D). These data are consistent with the GST pulldown results and further document that a post-translational modification of IRS-1, possibly tyrosine phosphorylation, inhibits the interaction between IRS-1 and 53BP2S. Effect of 53BP2S Expression on Insulin Signals in CHO-C400 Cells-To examine the biological function of 53BP2S in insulin/IGF-I signaling mediated by IRS proteins, we assessed the effect of 53BP2S expression on insulin signaling. GFP or GFP-53BP2S expression vectors were introduced into CHO-C400 cells by calcium phosphate transfection, and the activation of signaling targets in response to insulin was assessed. Expression of 53BP2S resulted in an increase in insulin-stimulated IRS-2 tyrosine phosphorylation, whereas IRS-1 tyrosine phosphorylation was decreased (Fig. 4B). In addition, the amount of IRS-1-associated p85 PI 3-kinase was decreased in 53BP2S-transfected cells, whereas the amount of p85 PI 3-kinase in the IRS-2 immunocomplex was enhanced compared with GFP-transfected cells (Fig. 4A).
To measure the total amount of p85 PI 3-kinase associated with tyrosine-phosphorylated proteins, whole cell lysates were immunoprecipitated with a phosphotyrosine antibody (4G10), and the amount of p85 PI 3-kinase in the immunocomplex was measured. Tyrosine phosphorylation of proteins at 180 kDa was enhanced by 53BP2S expression. Similarly, the amount of p85 PI 3-kinase immunoprecipitated by 4G10 was enhanced by 53BP2S overexpression (Fig. 5A). These data suggested that PI 3-kinase activation was elevated in 53BP2S-expressing cells. Because Akt activation is induced by PI 3-kinase, we next determined Akt activation in 53BP2S-expressing cells. It is well known that full activation of Akt kinase activity requires PDK1-dependent phosphorylation of threonine 308 followed by PDK2 phosphorylation on serine 473 (40). Consistent with the increase in the amount of p85 PI 3-kinase associated with tyrosine-phosphorylated proteins, insulin-stimulated Akt serine 473 phosphorylation was enhanced by GFP-53BP2S expression without any change in ERK activation (Fig. 5B). These results indicate that the enhancement of IRS-2 tyrosine phosphorylation and of the association of p85 PI 3-kinase with IRS-2 compensated for the reduction of IRS-1 tyrosine phosphorylation and p85 PI 3-kinase association with IRS-1 in these cells. The net effect is a 53BP2S-dependent enhancement of Akt activation. [FIGURE 4. Effects of 53BP2S expression on insulin-induced IRS tyrosine phosphorylation. A, CHO-C400 cells were transfected with pEGFP or pEGFP-53BP2S by the calcium phosphate method. Forty-eight hours later, cells were starved in serum-free medium and stimulated with or without insulin for the indicated times. Cell lysates were prepared from each culture and immunoprecipitated (IP) with anti-IRS-1 or anti-IRS-2 antibody. Immunoprecipitates were separated by SDS-PAGE and immunoblotted (IB) with anti-phosphotyrosine antibody (4G10) or anti-p85 antibody. B, CHO-C400 cells transfected with pEGFP or pEGFP-53BP2S were starved in serum-free medium and stimulated with or without insulin for 2 min. Cell lysates were prepared from each culture and immunoprecipitated with anti-IRS-1 or anti-IRS-2 antibody. Immunoprecipitates were separated by SDS-PAGE and immunoblotted with anti-phosphotyrosine, anti-IRS-1, or anti-IRS-2 antibody. Quantified data from each blot are shown in the graphs below the immunoblot images. Values are the means ± S.E. of three different experiments and are expressed relative to the insulin-stimulated GFP-transfected control. *, the difference between GFP-expressing cells and GFP-53BP2S-expressing cells with insulin stimulation is significant at p < 0.05.] Effect of 53BP2S Expression on PI 3,4,5-P3 Production and Akt Activation in 3T3-L1 Adipocytes-To determine the effect of 53BP2S expression on insulin signals in 3T3-L1 adipocytes, we measured the insulin stimulation of PI 3-kinase pathway activation in 53BP2S-expressing cells. We took advantage of the pleckstrin homology (PH) domain of Grp1 fused to GFP, which specifically interacts with PI 3,4,5-P3. Fully differentiated 3T3-L1 adipocytes were transfected with pFLAG and pGrp-PH-GFP or with pFLAG-53BP2S and pGrp-PH-GFP. Twenty-four hours later, cells were serum-starved for 2 h, followed by stimulation with insulin for 5 min. Cells were fixed, permeabilized, and stained with the FLAG antibody.
Under basal conditions, Grp-PH-GFP probe was primarily localized to nuclei (Fig. 6). However, insulin stimulation induced translocation of the probe to the plasma membrane. In FLAG-tagged 53BP2S-expressing cells, the plasma membrane staining of Grp-PH-GFP could be observed, indicating that PI 3,4,5-P 3 was normally produced in response to insulin even in 53BP2S-expressing cells (Fig. 6). Taken together, these data suggested that PI 3-kinase was activated in 53BP2S-expressing cells. To investigate the effect of 53BP2S expression on Akt activation in vivo, we established the single cell assay of Akt activation. At first, adipocyte cells were stained with phospho-Akt-specific antibody (Ser-473) with or without insulin stimulation. We found the nuclei staining, which is independent of insulin stimulation, but we also observed staining on the plasma membrane, which is dependent of insulin stimulation (Fig. 7A). The plasma membrane labeling was specific for Akt activation, as it was prevented by the PI 3-kinase-specific inhibitor LY294002 (Fig. 7A). Although the number of cells displaying the Akt staining on the plasma membrane under basal conditions was essentially 0%, insulin stimulation resulted in greater than 90% of the cells phospho-Akt-positive. Also in 53BP2S-expressing cells, insulin-induced phospho-Akt staining was normally observed (Fig. 7B). In this assay, we could not detect the enhancement of insulin-induced Akt phosphorylation by 53BP2S expression. To compare the relative effect of 53BP2S expression on the extent of Akt, we co-transfected 3T3-L1 adipocytes with 50 g of pHA-Akt2 and 200 g of pGFP or 50 g of pHA-Akt2 and 200 g of pGFP-53BP2S. In this way, most cells expressing HA-Akt2 were expected to also express GFP or GFP-53BP2S. Cell lysates were prepared from these transfectants and immunoprecipitated with the HA antibody, followed by immunoblotting with phospho-Akt-specific antibody (Ser-473). As shown in Fig. 7C, HA-Akt2 was activated following insulin stimulation that was enhanced by 53BP2S expression. Expression of 53BP2S Inhibits the Glut4 Translocation Induced by Insulin in 3T3-L1 Adipocytes-Because insulinstimulated Glut4 translocation is an important readout for insulin signaling, we next examined the effect of 53BP2S in the 3T3-L1 adipocytes. Fully differentiated 3T3-L1 adipocytes were transfected with plasmid expressing GFP and Glut4-myc or GFP-53BP2S and Glut4-myc. Twenty four hours later, cells were serum-starved for 2 h, followed by stimulation with insu- lin for 20 min. Cells were fixed, permeabilized, and stained with the Myc antibody (Fig. 8A). Fifty cells expressing both Glut4myc and GFP or Glut4-myc and GFP-53BP2S were counted, and the percentage of cells in which Glut4-myc was translocated to plasma membrane is shown in Fig. 8B. Insulin stimulation resulted in an ϳ4-fold increase in the number of transfected cells displaying Glut4 translocation to the plasma membrane (16.7 Ϯ 1.8 to 58.2 Ϯ 4.3%) in GFP-transfected cells. In contrast, overexpression of GFP-53BP2S significantly inhibited insulin-stimulated plasma membrane Glut4 translocation (14.5 Ϯ 1.6 to 23.9 Ϯ 3.3%). Under these conditions, there was no effect of 53BP2S on the basal level of Glut4 translocation demonstrating that 53BP2S specifically inhibited insulin signal events necessary for Glut4 translocation. To confirm this apparent inhibition of insulin-stimulated Glut4 translocation by 53BP2S, we next quantified the extent of Glut4 translocation using a double-labeled Glut4 reporter (Glut4-myc-eGFP). 
Briefly, 3T3-L1 adipocytes were electroporated with 200 g of pFLAG vector or 200 g of pFLAG-53BP2S along with 50 g of pGlut4-myc-eGFP by electroporation. Cells were fixed without permeabilization and incubated with Myc antibody. We assessed the relative Myc fluorescent in comparison with the total cell GFP fluorescence. The results further demonstrate that 53BP2S-expressing cells have a marked attenuation of insulin-stimulated Glut4 translocation (Fig. 8C). Because 53BP2S expression enhanced Akt activation and the downstream readout of Glut4 translocation was inhibited, 53BP2S must affect another target in the Glut4 translocation pathway. Recently it was reported that insulin induced phosphorylation of AS160 (Rab-GAP), one of Akt substrate, followed by activation of Rab10 was required for Glut4 translocation in 3T3-L1 adipocytes (16,17). To evaluate the effect of 53BP2S expression on the AS160 phosphorylation, we co-transfected 3T3-L1 adipocytes with 50 g of pFLAG-AS160 and 200 g of pGFP or 50 g of pFLAG-AS160 and 200 g of pGFP-53BP2S. Cell lysates were prepared from and immunoprecipitated with the FLAG antibody, followed by immunoblotting with phospho-AS160-specific antibody (Thr-642). As shown in Fig. 8D, FLAG-AS160 was phosphorylated following insulin stimulation, and this phosphorylation was repressed by 53BP2S expression. DISCUSSION This study was undertaken to identify proteins, which interact with IRS and modulate insulin signaling. Using yeast twohybrid screens, we isolated 53BP2S as one of IRS-interacting proteins. The interaction between 53BP2S and IRS proteins was confirmed by using both pulldown and co-immunoprecipitation binding assays. Expression of 53BP2S reduced insulininduced IRS-1 tyrosine phosphorylation with a concomitant enhancement of IRS-2 tyrosine phosphorylation in CHO-C400 cells. In addition, Akt activation was enhanced by 53BP2S expression. Consistent with these data, Akt activation was enhanced by 53BP2S expression also in 3T3-L1 adipocytes; however, Glut4 translocation to plasma membrane in response to insulin was significantly inhibited by 53BP2S expression. Previous studies demonstrated that chronic tumor necrosis factor-␣ pretreatment inhibited insulin-induced IRS-1 tyrosine phosphorylation, leading to a decrease in insulin sensitivity (26). Previously, we reported that chronic thyrotropin pretreatment enhances the IGF-I-induced IRS-2 tyrosine phosphorylation, leading to augmentation of IGF-I-induced DNA synthesis (28 -31). Although multiple studies have demonstrated that post-translational modification, such as Ser/Thr phosphorylation, plays important roles in modulating IRS tyrosine phosphorylation (18 -24), we have also observed that various IRS-associated proteins can have dramatic effects on IRS phosphorylation. These data indicated the possibility that IRS-associated proteins play important roles in modulation of insulin/IGF bioactivities. In this study, we identified the interaction regions of IRS-1, PTB domain, and the central region with 53BP2S. Although PTB domain is highly homologous among IRS family proteins, the central region is a unique sequence of IRS-1 compared with other IRS members, suggesting that the binding mechanism of 53BP2S with IRS-1 or IRS-2 could be different. This might account for the opposite effect of 53BP2S on tyrosine phosphorylation of IRS-1 and IRS-2. On the other hand, we also identified the interaction region of 53BP2S. 
Both the SH3 domain and the ankyrin repeats were required for the association, indicating that these two domains cooperatively form a conformational structure that is important for this interaction. The 53BP2S-interaction domains of IRS-1 did not contain the putative tyrosine phosphorylation sites, and the interaction was clearly detected in the basal state. These data suggest that 53BP2S interacts with IRS-1 in the absence of tyrosine phosphorylation and that the 53BP2S interaction modulates IRS tyrosine phosphorylation mediated by the insulin receptor tyrosine kinase. Thus, 53BP2S is a likely candidate for such an IRS-associated modulatory protein. [...] p85 PI 3-kinase regulatory subunit result in the preferential association of the catalytic subunit with p50 and enhanced insulin sensitivity (51). Thus, the adequate cellular distribution or isoform-selective activation of IRS, PI 3-kinase, and Akt could be disturbed by overexpression of 53BP2S. This disturbance could account for the inability of activated Akt to phosphorylate AS160. In addition, several studies have suggested the presence of additional signaling pathways that may function in concert with IRS-1/PI 3-kinase signaling through caveolin-enriched lipid raft microdomains (52,53). Although we have not assessed the effect of 53BP2S on lipid raft-dependent signaling, there is no evidence that the IRS/PI 3-kinase pathway functions through these microdomains (52). It is therefore more likely that the inhibition by 53BP2S occurs by blocking additional IRS-1-mediated signals. In addition, we cannot rule out the possibility that 53BP2S inhibits the Akt downstream pathway required for Glut4 translocation independently of the modulation of IRS-mediated insulin signals. 53BP2S and ASPP2 have also been identified as binding partners of many proteins, including p53, Bcl-2, NF-κB, PP1, and APCL (33,37,39,54,55), suggesting that 53BP2S/ASPP2 is involved in apoptotic pathways. IGF-I and IRS-1 are also well known to have anti-apoptotic activity. Ueno et al. (56) showed that IRS-1 interacts with Bcl-2 and has an anti-apoptotic effect. It is possible that Bcl-2 and 53BP2S/ASPP2 compete with each other for interaction with IRS and thereby regulate the anti-apoptotic role of IRS proteins. Whether the interaction between IRS-1 and 53BP2S/ASPP2 is involved in apoptosis awaits further study. In summary, we have found that the 53BP2S protein directly interacts with IRS family proteins in both pulldown and co-immunoprecipitation assays, independently of IRS tyrosine phosphorylation, and we have identified the specific binding regions responsible. Importantly, 53BP2S functions to regulate insulin-induced tyrosine phosphorylation of IRS isoforms, resulting in modulation of insulin signals. This leads to an attenuation of Glut4 translocation to the plasma membrane by as yet unknown mechanisms.
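For orientation, the direction of each 53BP2S-dependent change reported in this study can be collected in a compact, table-like structure; the entries restate the Results, and the representation itself is only an illustrative sketch.

```python
# Direction of change upon 53BP2S expression, as reported in the Results
# ("up" = enhanced, "down" = reduced, "unchanged" = no detectable change).
effects_of_53BP2S = {
    "IRS-1 tyrosine phosphorylation (CHO-C400)":             "down",
    "IRS-2 tyrosine phosphorylation (CHO-C400)":             "up",
    "p85 associated with tyrosine-phosphorylated proteins":  "up",
    "Akt Ser-473 phosphorylation (CHO-C400 and 3T3-L1)":     "up",
    "ERK activation":                                        "unchanged",
    "AS160 Thr-642 phosphorylation (3T3-L1)":                "down",
    "Insulin-stimulated Glut4 translocation (3T3-L1)":       "down",
}

for readout, direction in effects_of_53BP2S.items():
    print(f"{readout}: {direction}")
```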
Identification of a Cardiac Glycoside Exhibiting Favorable Brain Bioavailability and Potency for Reducing Levels of the Cellular Prion Protein Several strands of investigation have established that a reduction in the levels of the cellular prion protein (PrPC) is a promising avenue for the treatment of prion diseases. We recently described an indirect approach for reducing PrPC levels that targets Na,K-ATPases (NKAs) with cardiac glycosides (CGs), causing cells to respond with the degradation of these pumps and nearby molecules, including PrPC. Because the therapeutic window of widely used CGs is narrow and their brain bioavailability is low, we set out to identify a CG with improved pharmacological properties for this indication. Starting with the CG known as oleandrin, we combined in silico modeling of CG binding poses within human NKA folds, CG structure-activity relationship (SAR) data, and predicted blood–brain barrier (BBB) penetrance scores to identify CG derivatives with improved characteristics. Focusing on C4′-dehydro-oleandrin as a chemically accessible shortlisted CG derivative, we show that it reaches four times higher levels in the brain than in the heart one day after subcutaneous administration, exhibits promising pharmacological properties, and suppresses steady-state PrPC levels by 84% in immortalized human cells that have been differentiated to acquire neural or astrocytic characteristics. Finally, we validate that the mechanism of action of this approach for reducing cell surface PrPC levels requires C4′-dehydro-oleandrin to engage with its cognate binding pocket within the NKA α subunit. The improved brain bioavailability of C4′-dehydro-oleandrin, combined with its relatively low toxicity, make this compound an attractive lead for brain CG indications and recommends its further exploration for the treatment of prion diseases. Introduction The cellular prion protein (PrP C ) is widely understood to undergo conformational conversions that underlie a group of neurodegenerative diseases, known as prion diseases [1]. Several lines of evidence also point toward PrP C as a prominent receptor for the cellular docking of oligomeric Aβ peptides that mediate toxicity in Alzheimer's disease [2]. Experimental ablation of the prion gene does not appear to confer severe phenotypes in mice [3] or cattle [4], a finding that has since been corroborated when a goat with a natural bi-allelic nonsense mutation in the prion gene was identified [5]. Consistent with these observations, human 23andMe data indicate that individuals with only one functional PRNP allele can reach advanced age in good health [6]. Critically, mice lacking the Prnp gene are refractory to prion disease, and Prnp heterozygosity approximately doubles the prion disease survival time [7]. Moreover, when PrP C levels were suppressed in prion-infected mice after early spongiform degeneration or cognitive and neurophysiological prion disease symptoms were beginning to manifest, a partial rescue of these phenotypes could be observed [8,9]. Consequently, reducing steady-state levels of PrP C may be safe and may have merit for the treatment of prion diseases and Alzheimer's disease. To date, attempts to identify PrP C lowering drugs through screens of compound libraries have largely failed, with some of the best lead compounds either requiring relatively high concentrations to exert their effect or lacking favorable ADME characteristics [10,11]. 
Recent results from a study that targeted the stability of PrP C transcripts by treating prioninfected mice with antisense oligonucleotides (ASOs) provided elegant proof-of-concept validation of the premise that lowering steady-state PrP C levels can extend survival of prion-infected mice in a dose-dependent manner [12]. Robust data for the use of ASOs to address human brain disease exist in the form of advances in the treatment of spinal muscular atrophy [13]. Perhaps more relevant, the treatment of cynomolgus macaques with ASOs designed to induce the destruction of transcripts coding for the tau protein led to approximately 50% reductions in tau protein levels in several brain structures [14]. Challenges associated with adapting this approach for the treatment of human prion diseases relate to the need to inject ASOs periodically through the intrathecal route because mRNA levels are shown to recover two months following bolus injections [15], and limitations in the delivery of ASOs to deep brain structures, a caveat that may be exacerbated in human adults due to their relatively large brain sizes [16]. Thus, to date, no treatment has reached the clinic that can effectively reduce human brain PrP C levels by targeting the expression or stability of this protein directly. We recently discovered that PrP C is surrounded by Na,K-ATPases (NKAs) in the brain [17], which led us to propose an indirect targeting approach [18]. We showed that when cardiac glycosides (CGs), a class of compounds also known as cardiotonic steroids, bind to NKAs, it causes their internalization and degradation. Rather than this representing a selective internalization of only the ligand-receptor complex, we observed that CG exposure of human neurons and astrocytes causes other NKA-associated molecules, including PrP C , to also be removed from the cell surface and degraded. A replacement of key amino acid residues in the CG-docking site of ATP1A1 in the human ReN VM cell model prevented the CG-dependent PrP C degradation, indicating that the latter is dependent on CG ligand-ATP1A1 receptor engagement in this paradigm, as opposed to some other non-specific effect of CGs on the cell. Finally, we learned that the degradation of PrP C under these circumstances relies on the endo-lysosomal system and specifically involves the cysteine protease cathepsin B [18]. CGs have a long history of use in the clinic for the treatment of heart diseases [19][20][21]. More recently, compounds from this class have been considered for other uses, including the treatment of cancers [22][23][24]. In addition, the notion to widen the use of CGs to the treatment of stroke and neurodegenerative diseases has had some traction [25]. To date, brain indications of CGs are limited by the relative narrow therapeutic window and poor blood-brain barrier (BBB) penetration or brain retention of members of this compound class [26,27]. Arguably most attention in this regard has been paid to a CG known as oleandrin, which can be derived from the ornamental shrub Nerium oleander [28,29] and has shown to accumulate to relatively high levels in brain tissue [30,31]. However, oleandrin has recently been shown to exhibit inadvertent cardiotoxicity that exceeded the toxicity of other CGs and may limit its use [32]. We therefore set out to identify a CG with more favorable characteristics for the objective to lower steady-state PrP C levels in the brain. To this end, we initially modeled the predicted binding of oleandrin to human NKAs expressed in the brain. 
Next, we considered available structure-activity relationship (SAR) data to narrow the possible chemical space for modifying oleandrin to those changes that can be attained with good yields. The in silico evaluation of a combinatorial CG library through a virtual screen revealed a small number of derivatives with promising BBB penetrance and docking scores. Subsequent work focused on C4′-dehydro-oleandrin, hereafter termed KDC203. We will show that KDC203 has potency similar to that of oleandrin in cell-based assays yet exhibits improved brain penetrance. In fact, 24 h after subcutaneous injection, brain levels of KDC203 were higher than its levels in heart, kidney, or liver. Moreover, we document that KDC203 is a lesser substrate than oleandrin of the human multi-drug transporter MDR1, known to be responsible for the rapid extrusion of other CGs, and is projected to reach an unbound concentration exceeding 10 nM in the brain. Finally, we show that exposure of cells of human neuronal and astrocytic lineage to low nanomolar KDC203 levels causes a profound reduction in steady-state PrP C levels through a mechanism of action that requires direct engagement of KDC203 with its NKA target. Study Design A vast majority of commercially available CGs exhibit poor brain bioavailability, either because they are too hydrophilic or because they are actively extruded from the brain [33][34][35]. To our knowledge, no large libraries of less investigated CGs are commercially available. The de novo synthesis of CGs is challenging and requires dozens of steps [22,[36][37][38][39]. These realities dissuaded us from pursuing a screen-based approach for the identification of a CG that exhibits favorable characteristics for brain indications. Instead, we decided to build on an existing CG, oleandrin, which has shown promising brain bioavailability, and to employ a study design that pairs rational design with chemical derivatization to improve its properties for our indication. In this approach, a small number of compounds are shortlisted on the basis that they are predicted to have high blood-brain barrier penetrance and are chemically accessible with high purity and yield through a small number of derivatization steps. Because oleandrin served as the reference compound in this study, we subsequently compared key pharmacological and biochemical properties of KDC203 and oleandrin side by side (Figure 1). In Silico Modeling of Oleandrin Binding to Human NKA α Subunit To be able to compare in silico the NKA binding properties of CGs for which no high-resolution binding data are available, a suitable model needed to first be generated and evaluated. Several high-resolution models have been published for shark and porcine NKAs [40][41][42][43]. These show CGs to occupy a binding pocket that can be accessed from the extracellular space and is molded from amino acid residues contributed by the α subunit (Figure 2A). Importantly, different CGs show highly conserved binding to this site even in NKA structures captured in different ion binding states (Figure 2B).
To assess if building any human NKA model for the intended objective based on the porcine structures would be meaningful, we initially compared the sequence similarity between human and porcine NKA α subunits ( Figure 2C), which indicated almost perfect sequence identity amongst ATP1A1 orthologs and showed that even human ATP1A2 and ATP1A3 paralogs share 87% sequence identity to porcine Atp1a1. Critically, the alignment of amino acid residues known to line the CG binding pocket in porcine Atp1a1 with the homologous human residues, predicted that key properties of the binding pocket are maintained in human α subunits, including the existence of previously described hydrophobic and charged internal faces ( Figure 2D). Assured that model building based on the porcine structure is likely to provide relevant predictions for the human NKA α subunits, we focused our modeling on ATP1A1 and ATP1A3. These human NKA α subunits are expressed in brain neurons, and unless models can be built for them, the whole undertaking might be futile. To this end, we used the Glide [44] docking module in the Schrödinger suite (Version 2020-3) to predict binding poses for oleandrin and to evaluate the likelihood of binding poses to exist naturally based on the free binding energies (∆G) associated with them. Two main methods were used to evaluate the proposed ligand-protein complexes, the Molecular Mechanics-Generalized Born Surface Area (MM-GBSA) model ( Figure 2E) and the slower Molecular Mechanics-Poisson Boltzmann Surface Area (MM-PBSA) method [45], which computes the free binding energy by subtracting the free energy of the docked ligand-receptor complex from the free energies of its separate components [46] ( Figure 2F). The latter revealed a free binding energy of −62.56 kcal/mol for a surface area-optimized binding pose of oleandrin within the canonical CG binding pocket of ATP1A3. The subsequent comparison of this binding pose revealed exquisite structural alignment with the experimentally observed ouabain binding pose, validating it to be a highly plausible naturally occurring pose ( Figure 2G). A parallel screen of the pertinent literature for information on structure-activity relationship (SAR) data and features that may increase the brain bioavailability of CGs ( Figure 2H) directed our attention toward the CG sugar moiety and opportunities to derivatize C16 within the steroid core. Evaluation of Chemically Accessible Oleandrin Derivatives Based on Binding Score and Brain Bioavailability Our subsequent efforts focused on 270 CGs that we anticipated to be chemically accessible through combinatorial derivatization of five CG scaffolds ( Figure 3A).
Rather than synthesizing these CGs, we decided to shortlist molecules in this compound library that were predicted to have promising binding characteristics and improved brain bioavailability (relative to oleandrin) through a virtual screen. Initially, compounds were evaluated on the basis of physicochemical properties that determine the propensity for brain penetrance, which can be computed as a multi-parameter optimized (Version 2) score, first introduced by Zoran Rankovic [48] ( Figure 3B). The score is mostly influenced by the molecular size and hydrogen-bonding capacity of a molecule. The maximum Rankovic MPO.v2 score for an ideal brain-penetrant compound would be 6.0. Once protonation states and partial charges were considered, the number of virtual screen entries increased to 330. Using our human ATP1A3 model and the MM-PBSA docking method, these entries gave rise to 815 binding poses, a majority of which could be dismissed due to unfavorable internal energies or docking to a site that deviated from the canonical CG binding site. Data obtained when evaluating the 20 CG permutations defined in Scaffold 1 illustrate a typical result ( Figure 3C). In this case, six of the 20 CGs were rejected due to excessive internal energies or a binding pose that deviated fundamentally from the canonical CG binding pose. Rankovic MPO.v2 scores assigned to the remaining 14 CGs associated with this scaffold ranged between 2.33 and 3.00 and therefore provided no improvement of predicted brain penetrance relative to oleandrin. Finally, Glide docking scores for these CGs ranged from −4.60 to −7.35, indicating that a subset of these CGs was predicted to have improved free binding energies relative to oleandrin. Overall, 146 CGs passed the filters we had applied. Amongst these, Scaffold 4-derived CGs were remarkable because they exhibited the widest breadth of Rankovic MPO.v2 scores and were consistently predicted to improve binding to ATP1A3. In contrast, all Scaffold 5-derived CGs were predicted to exhibit improved brain penetrance, but only a subset of these exhibited improved free binding energy relative to oleandrin ( Figure 3D). Synthesis of Lead Compound with Favorable Characteristics Results from the virtual screen recommended a small number of molecules for further analyses on the basis that they were predicted to exhibit comparable or improved binding to human ATP1A3 and to pass the BBB better than the oleandrin reference compound, which was computed to have a Glide docking score of −6.4 and an intermediate Rankovic MPO.v2 score of 3.0 ( Figure 4A). More specifically, we shortlisted eight molecules, KDC201 to KDC208 ( Figure 4B,C), prioritizing KDC203 (C4′-dehydro-oleandrin) for this study because (i) it can be easily obtained through derivatization of oleandrin [49], (ii) it constitutes an intermediate for the synthesis of KDC201-KDC206, and (iii) it was predicted to have a Rankovic MPO.v2 score of 3.75. To put this result into perspective, it is helpful to know that ten percent of a diverse mix of 324 marketed central nervous system (CNS) drugs give rise to a Rankovic MPO.v2 score below 3.4, and the median Rankovic MPO.v2 score of the same sample of CNS drugs was computed to be 4.7 [48]. Although not pursued here, replacing the C4′ hydroxyl with an amine is expected to further increase binding to ATP1A3 (KDC207) ( Figure 4D). Similarly, introducing a C2′ methoxy group (KDC201) or a C3′ isopropyl moiety (KDC205) within the KDC203 sugar was predicted to further improve binding to ATP1A3 ( Figure 4E).
Finally, a gain in the Glide docking score was consistently attained when we replaced hydrogen atoms within the C16 acetoxy group of the oleandrigenin core with fluorine atoms (KDC202, KDC204, KDC206, KDC208) ( Figure 4F). To move from in silico to biochemical analyses, we obtained KDC203 through oxidation of oleandrin in dichloromethane followed by HPLC clean-up, as a white solid with >95% purity (assessed by 1H NMR). A stably tritiated version of this compound was obtained commercially through a customized labeling request. [Figure 3 legend (continued): The assignment of protonation states and partial charges increased the total number of CG ligands evaluated to 330, a number that further increased to 850 once alternative binding poses, predominantly caused by freely rotating bonds, were considered. The elimination of binding poses with unfavorable internal energy or deviation from the hypothetical oleandrin binding pose led to 146 CGs that passed these filters. (C) Exemplary chart depicting results from the evaluation of Scaffold 1 CG derivatives. The position of Scaffold 1 combinatorial candidate CGs within the chart can be deduced from the R1 and R chemical modifications defined on the two axes. The color scheme reflects Rankovic MPO.v2 scores [48] (a high score, represented by a green square, indicates high predicted BBB penetrance), and the size of the squares reflects docking strength (a low docking score, represented by a large square, indicates strong binding). The absence of a square indicates that no binding pose that passed the filter criteria (see above) was found. Asterisks indicate sites used for the attachment of the respective combinatorial moieties. (D) Summary chart depicting results from the evaluation of Rankovic MPO.v2 scores and docking scores for all five scaffolds. Note that CGs derived from Scaffold 4 had similar docking scores as derivatives from other scaffolds but excelled based on their high predicted BBB penetrance. As a reference, the position of oleandrin in this chart is shown at the intersection of the two blue lines.] Analysis of Brain Bioavailability To assess the bioavailability of KDC203 in a model whose NKAs respond to CGs similarly to their human orthologs, wild-type rodents would not be a good choice, because they have long been known to exhibit more than hundredfold lower sensitivity toward a large range of CGs than most other mammals [50][51][52], a difference that has since been attributed to specific differences in the Atp1a1 amino acids lining the CG binding pocket. Rather than moving to larger mammals, whose ATP1A1 genes may be more similar to the human ortholog in this regard, we pursued these studies with Atp1a1 gene-edited mice engineered to carry two point mutations in the first extracellular loop, namely the amino acid exchanges Q111R and N122D (amino acid numbering as for the PDB entry 4HYT) coded by the Atp1a1 gene (Atp1a1 S/S ) [53]. These point mutations are understood to sensitize this α subunit toward CG binding by making it more "human-like" [54,55]. One day following the subcutaneous injection of five Atp1a1 S/S mice per cohort with tritiated oleandrin or KDC203, the mice were sacrificed and transcardially perfused with phosphate-buffered saline (PBS), and radioisotope levels were determined in their brains, hearts, kidneys, and livers ( Figure 5A). These analyses revealed that KDC203 achieved significantly higher brain levels than oleandrin.
Notably, a mean total concentration of 30.46 nM tritiated KDC203 was observed in the brains of five mice, a value that exceeded KDC203 levels in the heart 3.5-fold and even surpassed mean kidney and liver levels. Taken together, this experiment established KDC203 to exhibit, relative to oleandrin, the improved brain bioavailability that its higher Rankovic MPO.v2 score (3.75 for KDC203 versus 3.0 for oleandrin) had predicted. Transepithelial Diffusion and Transport To compare the rates of transcellular passive diffusion and active transport of KDC203 and oleandrin, we made use of canine MDCK epithelial cells, which are known to provide a high level of monolayer integrity through tight junctions limiting paracellular diffusion. More specifically, a second-generation MDCKII cell line was employed in these experiments that had been generated by deleting the canine MDR-1 transporter and replacing it with the human MDR1 gene. The permeability screen was conducted as a bidirectional assay. Thus, first, the rate of transepithelial flow from apical to basolateral (Papp A-B ) was recorded, revealing a slightly lower value of 7.06 × 10−6 cm/s for oleandrin than for KDC203, which crossed the epithelium at a rate of 7.24 × 10−6 cm/s (Figure 5B), with both compounds showing a considerably higher permeability than the published value of 0.5 × 10−6 cm/s for digoxin [56,57]. A striking difference between oleandrin and KDC203 became apparent when measuring the reverse rate of transport from basolateral to apical (Papp B-A ), which indicated that oleandrin was being actively extruded at 145.79 × 10−6 cm/s, more than twice the 70.60 × 10−6 cm/s rate of KDC203 extrusion. Consequently, the efflux ratios (ERs), computed as the ratio of the rate of extrusion to the rate of intrusion, exceeded values of 2 for both CGs, thereby establishing both CGs as hMDR1 substrates. Consistently, when we next added the MDR-1 inhibitor Zosuquidar, the extrusion of KDC203 was reduced by more than 50%, further corroborating its hMDR1-dependent extrusion. Free Versus Bound Fraction Next, we measured the free unbound (f u ) concentrations of oleandrin and KDC203 in rapid equilibrium dialysis (RED) experiments. Deploying brain homogenates of Atp1a1 S/S mice, guinea pigs, or humans, we observed that the f u,brain values of oleandrin were lower than the corresponding values for KDC203, indicative of lower unspecific binding of KDC203 to biological surfaces within the brain (Figure 5C). This difference reached robust significance (p < 0.005) when studying brain homogenates of Atp1a1 S/S mice and guinea pigs and barely missed significance with human brain homogenates. Figure 5. Improved brain bioavailability of KDC203 relative to oleandrin. (A) Oleandrin enriches in kidney and liver tissue, and its brain and heart levels are similar 24 h following its acute subcutaneous injection as a tritium-labeled compound into cohorts of five mice. In contrast, KDC203 reaches the highest levels in the brain, and its brain levels exceeded its heart levels 3.5-fold 24 h after subcutaneous injection. 'ns' denotes non-significance.
Asterisks indicate the level of statistical significance, i.e., one asterisk denotes p < 0.05, with each additional asterisk denoting tenfold lower p-values. (B) KDC203 is a lesser hMDR1 substrate than oleandrin. Recorded values of apparent permeability (Papp) from apical to basolateral (A-B) and vice versa (B-A). The efflux ratio is calculated as Papp B-A /Papp A-B . Zosuquidar was deployed as a selective inhibitor of MDR1. Measurements of the TEER on the day of the experiment (day 5) resulted in a mean value of 107 ± 8 Ω·cm2 (±SD, n = 22). Lucifer Yellow permeability data (n = 22) were generated prior to the bidirectional assay with oleandrin and KDC203 (n = 3). (C) Rapid equilibrium dialyses of brain and plasma samples establish that KDC203 has a lower propensity than oleandrin to associate nonspecifically with components in the respective extracts, predictive of a higher concentration of free KDC203 that is available for specific engagement with its NKA target. (D) Pertinent pharmacological characteristics of oleandrin and KDC203 computed based on their in vivo tissue and plasma concentrations and the in vitro measurement of their unbound fractions in rapid equilibrium dialyses. From these experimental results, additional pharmacokinetic characteristics of oleandrin and KDC203 could be deduced (Figure 5D). The approximately twofold lower efflux ratio of KDC203 relative to oleandrin predicted an improvement in its CNS exposure that is consistent with the experimentally determined increase in its total brain levels (Figure 5A). The calculated K p,uu values for oleandrin (0.27) and KDC203 (0.44) were both below 1, thereby revealing possible efflux processes at the BBB, with oleandrin again exhibiting a higher degree of transport asymmetry. V u,brain values for oleandrin and KDC203 of 3.98 mL/g and 2.90 mL/g brain, respectively, express the tendencies of the compounds to reside outside of the interstitial fluid (ISF), e.g., bound to brain tissue. Finally, the mean free concentration of these CGs in the brain, expressed as C u,brain, which is influenced by the extent of CNS delivery as well as by intracerebral distribution, computed to 4.61 nM for oleandrin and 10.05 nM for KDC203, thereby predicting the latter to reach a more than twofold higher unbound concentration in the brain. In Vitro Assessment of Potency in Differentiated ReN VM Cells Because the in silico docking analyses of oleandrin and KDC203 had returned identical Glide scores of −6.40 (Figure 4), we anticipated that both compounds might exhibit similar potencies in an assay that relies on the engagement of these compounds with their cognate NKA α subunit binding site. To evaluate this characteristic experimentally in a relevant model, we turned toward immortalized human cells (ReN VM cells). Others [58] and we [59] had previously shown that within one week of differentiation these cells develop extensive neuritic connectivity and acquire neuronal or astrocytic characteristics, including an upregulation of the Tuj-1 or GFAP cell lineage-specific markers. Here, we exposed differentiated ReN VM cultures for a duration of one week to low nanomolar levels of oleandrin or KDC203. Following cell lysis, steady-state ATP1A1 levels were assessed by Western blot analysis to determine if the ligand-receptor interactions had caused the previously observed reduction in ATP1A1 levels [18].
This analysis established that KDC203 had indeed a similar potency to oleandrin on the basis that both drugs achieved a similar reduction in ATP1A1 levels when added at 4 nM concentration to the cell culture medium (Figure 6A). However, whereas oleandrin was toxic at concentrations exceeding 4 nM [18], KDC203 was tolerated by the cells at all concentrations tested in this initial pilot experiment. Remarkably, exposure of ReN VM cells for one week to 8 or 16 nM levels of this compound led to additional reductions in ATP1A1 levels. Further analyses in ReN VM cells established that the reduction in the levels of the NKA α subunit ATP1A1 is paralleled by a lesser yet also pronounced reduction in the levels of the NKA β subunit ATP1B1 (Figure 6B). As for steady-state PrP C levels, the quantitation of the intensities of Western blot bands revealed that a maximum 84% reduction in 3F4-reactive signals was attained when ReN VM cells were exposed for one week to 12 nM concentrations of KDC203 (Figure 6C). To assess whether the KDC203-dependent reductions in the steady-state protein levels of ATP1A1 and PrP C represent an idiosyncrasy of ReN VM cells or can also be observed in another human neural cell model, we next repeated the analysis with differentiated T98G cells. Noticing that the KDC203-dependent reduction in ATP1A1 was less pronounced in T98G than in ReN VM cells, we extended the Western blot analyses to other NKA α subunits that are known to be expressed in human brains. Remarkably, not only did this cell model recapitulate the KDC203-dependent reduction in PrP C levels but it attained it without exhibiting a similarly profound diminution of the three NKA α subunits ATP1A1, ATP1A2, and ATP1A3 (Figure 6D). Figure 6. (A) Oleandrin and KDC203 reduce steady-state ATP1A1 protein levels to a similar degree. Side-by-side Western blot-based comparison of ATP1A1 signal intensities in cellular extracts derived from ReN VM cells following 7-day treatment with the respective CGs (equal amount of total protein loaded). Because the Western blot was not stripped of the ATP1A1-directed primary and secondary antibodies before it was stained with Coomassie, a stronger signal can be seen at the 85-90 kDa level, where the ATP1A1 detection antibodies bound. (B) A 7-day exposure of ReN VM cells not only affects ATP1A1 protein levels but also causes a concentration-dependent reduction in the steady-state protein levels of the NKA β subunit ATP1B1 and of PrP C . Note that the total amount of protein loaded in the two biological replicate series was not identical for the PrP-directed 3F4 blot; to capture an informative linear range, half the amount of total protein was loaded for the samples shown on the right-hand side. (C) Quantitation of Western blot signal intensities of ATP1A1 and PrP C following 7-day KDC203 treatment of ReN VM cells at the concentrations indicated, with each value being computed from the analysis of three biological replicates. (D) The KDC203-dependent reduction in steady-state PrP C protein levels is not an idiosyncrasy of ReN VM cells but can also be observed in other neural cell models, including differentiated human glioblastoma cells (T98G). Asterisks indicate the level of statistical significance, i.e., one asterisk denotes p < 0.05, with each additional asterisk denoting tenfold lower p-values.
KDC203 Is Less Toxic Than Oleandrin Considering the critical role NKAs play in maintaining the electrochemical gradient in all eukaryotic cells, we wondered whether the KDC203-dependent diminution in steady-state ATP1A1 levels that we had observed in ReN VM cells was compensated for by an upregulation of ATP1A2 and/or ATP1A3. To address this point, a repeat KDC203 treatment of differentiated ReN VM cells was undertaken, this time exposing cells to up to 40 nM concentrations of the compound, and cellular extracts were again analyzed for all three NKA α subunits expected to be expressed in these cells (Figure 7A). Western blot results from this experiment corroborated the robust KDC203-dependent reduction in steady-state levels of ATP1A1 but also revealed that this effect of the compound extends to ATP1A2 levels. More specifically, ATP1A1 levels were diminished in cells treated with KDC203 up to 16 nM levels yet were observed to rebound slightly when KDC203 levels were further increased. In contrast, steady-state ATP1A2 levels declined as levels of KDC203 were increased, with no sign of a rebound at any concentration assessed. Intriguingly, ATP1A3 levels remained unchanged in the presence of up to 12 nM KDC203, then increased and reached a maximum in the presence of 20-32 nM KDC203 before declining again at 40 nM KDC203. When Western blots from the same cellular extracts were probed with an antibody that detects human PrP C , its steady-state levels were again revealed to be dramatically reduced in cells exposed to 4-16 nM KDC203. Interestingly, in cells exposed to KDC203 concentrations of 20-32 nM, a slight rebound in the intensity of 3F4-reactive signals was visible. However, the PrP C signals observed under these circumstances were split into two signals that migrated with apparent molecular weights of 30 and 35 kDa, in contrast to the dominant 3F4-reactive signal detected at 32 kDa in naïve ReN VM cells. We have not yet investigated if changes in the N-glycosylation of PrP C are responsible for this altered SDS-PAGE migration pattern of 3F4-reactive bands under these treatment conditions. Importantly, the KDC203 exposure of the cells did not affect bulk protein levels, as these were constant for all concentrations tested. These experiments suggest that the ReN VM cell cultures were able to cope with KDC203 concentrations exceeding 8 nM by increasing their steady-state ATP1A3 levels. Next, we interrogated the health of ReN VM cells exposed to oleandrin versus KDC203 by monitoring their metabolic activity, ATP levels, and intracellular calcium levels using fluorescent indicators (Figure 7B). Setting an arbitrary toxicity threshold that marked the concentration at which all three cellular health indicators dropped to 75% of the levels observed in untreated cells, these analyses determined that oleandrin reached this threshold at 2.7 nM versus 46 nM for KDC203. Taken together, these experiments corroborated the conclusion that ReN VM cells are better equipped to tolerate double-digit nanomolar concentrations of KDC203 than oleandrin and that a compensatory upregulation of ATP1A3 may play a role in the tolerance toward KDC203.
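The 75% toxicity threshold used above corresponds to the concentration at which an interpolated concentration-response curve crosses 75% of the untreated signal. A minimal sketch of such an interpolation on a logarithmic concentration axis is shown below; the readings are hypothetical and are not the data behind the 2.7 nM and 46 nM values.

```python
import numpy as np

# Hypothetical viability readings (% of untreated) across a concentration series (nM).
conc_nM   = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
viability = np.array([101.0, 98.0, 95.0, 90.0, 84.0, 79.0, 62.0])

def threshold_concentration(conc, signal, threshold=75.0):
    """Concentration at which the signal first drops below the threshold,
    estimated by linear interpolation on a log10 concentration axis."""
    below = np.where(signal < threshold)[0]
    if below.size == 0:
        return None            # threshold never reached in the tested range
    i = below[0]
    if i == 0:
        return conc[0]         # already below threshold at the lowest concentration
    x0, x1 = np.log10(conc[i - 1]), np.log10(conc[i])
    y0, y1 = signal[i - 1], signal[i]
    return 10 ** (x0 + (threshold - y0) * (x1 - x0) / (y1 - y0))

print(f"75% threshold crossed at ~{threshold_concentration(conc_nM, viability):.0f} nM")
```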
Figure 7. KDC203 induces a compensatory upregulation of ATP1A3, is less toxic than oleandrin, and requires binding to the CG binding pocket within ATP1A1 for reducing PrP C levels. (A) The KDC203 concentration-dependent reduction in the steady-state protein levels of ATP1A1, ATP1A2, and PrP C is paralleled by an increase in ATP1A3 levels. Total protein concentrations of cellular extracts were adjusted, and equal volumes of these adjusted extracts were loaded onto the gel, as evidenced by the Coomassie stain. Please note, in differentiated ReN VM cells exposed to 20-32 nM KDC203 concentrations, the slight rebound in ATP1A1 signals and the splitting of the 3F4-reactive signals into two bands that migrated slower and faster than the corresponding band in vehicle-treated cells. (B) KDC203 exhibits tenfold lower toxicity than oleandrin in three assays that were used to monitor the metabolic health of 7-day differentiated ReN VM cells. Each data point represents the mean of six biological replicates. Concentrations are depicted on a logarithmic axis. (C) Amino acid sequence alignment of an NKA α subunit segment contributing to CG binding in the wild-type human ATP1A1 protein and its mutated derivative rendered refractory to CG binding by replacing human residues 118 and 129 (numbering based on human ATP1A1 transcript NM_000701.7) with the corresponding mouse Atp1a1 residues. (D) KDC203 causes the expected reduction in the steady-state levels of ATP1A1 and ATP1A2 in differentiated wild-type ReN VM cells but not in the gene-edited ReN VM ATP1A1 R/R cells whose ATP1A1 protein does not bind CGs. Despite the decrease in the levels of ATP1A2, the NKA α subunit that is predominantly expressed in astrocytes, no reduction in the steady-state levels of astrocytic GFAP is observed. Mutagenesis of CG Binding Site within ATP1A1 Abolishes KDC203-Dependent PrP C Reduction Intrigued by our observation that KDC203 suppressed steady-state protein levels of ATP1A1 and ATP1A2 yet increased ATP1A3 levels when ReN VM cells were exposed to concentrations exceeding 16 nM, we wondered if the levels of these α subunits were independently or interdependently affected by KDC203. We had previously observed that the CG-dependent reduction in the steady-state protein levels of ATP1A1 and PrP C could be rescued by rendering the human ATP1A1 protein resistant to high-affinity CG docking [18], thereby establishing the need for the CG and the NKA to form a ligand-receptor interaction to achieve this outcome, as opposed to other, less defined effects of CGs on the cell. To obtain this result, we had mutated the ATP1A1 allele in two positions such that the codons for two amino acid residues known to contribute to CG binding instead code for the corresponding amino acids present in mouse Atp1a1 [60] (Figure 7C).
Here, we made use of this ReN VM-derived ATP1A1 r/r cell model to investigate whether the KDC203-induced changes to ATP1A2 and ATP1A3 are triggered by KDC203 binding directly to the respective cognate binding sites in these NKA α subunit paralogs or are also caused by KDC203 docking to ATP1A1 and affecting ATP1A2 and ATP1A3 indirectly. Remarkably, abrogating binding of KDC203 to ATP1A1 r/r not only precluded the changes in steady-state levels of ATP1A1 but also prevented changes to the steady-state levels of ATP1A2 and ATP1A3 (Figure 7D). Moreover, these experiments validated the hypothesis that the KDC203-mediated reduction in steady-state levels of PrP C depends on this CG predominantly forming a ligand-receptor complex with ATP1A1, because no KDC203-mediated reduction in PrP C levels was observed in the ATP1A1 r/r cell model. Considering the well-known preferential expression of ATP1A2 and ATP1A3 in astrocytes and neurons, respectively, we next wondered if the KDC203-dependent reduction in ATP1A2 and concomitant increase in ATP1A3 steady-state levels reflected a shift to a more neuronal differentiation state. To this end, we compared levels of the neuron-specific class III β-tubulin (Tuj-1) and the astrocytic glial fibrillary acidic protein (GFAP) in ReN VM cells treated with KDC203. Western blot analyses of cellular extracts, which had been adjusted for total protein, revealed that the KDC203-mediated, ATP1A1-dependent decrease in PrP C levels was not paralleled by significant changes in the levels of Tuj-1 or GFAP (Figure 7D). Taken together, this experiment highlighted the importance of KDC203 being able to dock to its cognate binding site on ATP1A1 for all changes in steady-state levels described here. It also suggested that the KDC203-induced shift in the expression profile of NKA α subunits is indicative of a plasticity in the expression of these paralogous subunits, perhaps as part of a compensatory rescue for the loss of ATP1A1, rather than representing a facet of cells undergoing broad astrocyte-to-neuron reprogramming in the presence of KDC203. Discussion This report described a systematic approach to the identification and validation of a CG that offers improved brain bioavailability relative to other molecules within this compound class. The work presented is part of a larger research program aimed at the development of a treatment for prion diseases. It focused on the pharmacological properties, potency, and mechanism of action of a lead CG, termed KDC203, that reduces PrP C levels by targeting NKAs in its immediate proximity. Starting with oleandrin, a CG that has shown some promise for brain-related applications [28,29,61,62], we sought to avoid resource-intensive in vitro screens by pairing in silico modeling and BBB penetrance predictions with insights into chemically accessible derivatives. Using this pragmatic approach, a chemical space of more than 270 CGs was filtered to fewer than ten compounds whose Rankovic MPO.v2 scores and Glide scores for binding to NKAs were predicted to equal or surpass the corresponding scores for oleandrin. Subsequent pharmacological and biochemical analyses focused on KDC203, establishing that its brain bioavailability is approximately twofold improved relative to oleandrin. The compound is stable at 36.6 °C for extended periods of time (Figure S1), and the treatment of ReN VM cells with 12 nM concentrations of KDC203 suppressed steady-state PrP C levels by as much as 84% but had no impact on the overall expression of most proteins in these cells.
Our biochemical analyses indicate that, at least in ReN VM cells, the PrP C level reduction achieved with KDC203 depends on the compound forming a ligand-receptor complex with ATP1A1, which in turn leads to a reduction in ATP1A1 and ATP1A2 levels and an increase in ATP1A3 levels. The CRISPR-Cas9-driven replacement of two amino acids, predicted to contribute to the established CG binding site within ATP1A1, prevented all these changes to the steady-state protein levels of NKAs and PrP C . The ability of certain plant and animal species to synthesize CGs represents an adaptation that serves as a defense against herbivores and predators. Because this system is ancient and predated the existence of organisms with complex brains, the ability to pass the BBB efficiently is unlikely to have played a role in the evolution of CGs. Instead, compounds within this class act on cells by potently breaking their electrochemical gradient, leading, at least in mammals, to fatal cardiac arrhythmias as the cause of death. This reality represents both a challenge and an opportunity for the objective of identifying CGs with improved brain bioavailability; it suggests that the failure of available CGs to pass efficiently into the brain may not reflect a fundamental challenge that even extended fitness selection was not able to solve but rather constitutes an untapped opportunity. KDC203, chemically known as 4′-dehydro-oleandrin and our current lead compound for brain CG applications, is not novel. A method for its derivatization from oleandrin was first reported in a patent application from 1975 [63] as one of several CGs that emerged around that time from a larger drug development program of the German Beiersdorf AG. The original inventor described 4′-dehydro-oleandrin as a "good cardiotonic, and particularly suitable for use as a medicament in the treatment of cardiac insufficiency". Its oral toxicity assessed in cats was almost twofold lower than the corresponding value for oleandrin, yet its oral effectiveness, a measure determined by comparing the lethal doses of a given compound following its oral versus intravenous administration, was approximately 20% higher. To our knowledge, the only other mention of this compound in the primary literature can be found in a report from 2016, which described its synthesis alongside the synthesis of other C4′-substituted oleandrin derivatives and determined that its cytotoxicity was slightly reduced relative to oleandrin [49]. The authors reported an IC 50 of 46 nM (after 72 h exposure of cells) toward HeLa cells. In this study, we observed cytotoxicity after 7-day treatment of ReN VM cells at KDC203 concentrations upward of 40 nM. A future prion therapeutic needs to provide protection over extended periods. Consequently, the rate of uptake is of lesser concern than the brain concentration of a treatment compound available for target engagement and its safety profile over time. Hence, pharmacologically, the focus should be on the extent to which an orally administered CG accumulates in unbound form, a prerequisite for target engagement, in the brain versus other organs over time. Both the in vitro MDCKII assay data and the calculated K p,uu values suggest that KDC203 will be subjected to active extrusion at the BBB. Encouragingly, the data presented here suggest that KDC203 is a lesser hMDR1 substrate than oleandrin and reaches within 24 h higher levels in the brain than in other tissues we investigated, including the heart.
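The unbound-exposure bookkeeping referred to here can be made explicit in a short numerical sketch. The inputs are the KDC203 values reported in the Results; the 'implied' quantities are back-calculated from those reported numbers rather than independently measured.

```python
# Back-of-the-envelope check of the unbound-exposure relations discussed here,
# using the KDC203 values reported in the Results (all concentrations in nM).
c_total_brain = 30.46   # mean total brain concentration of KDC203 (reported)
c_u_brain     = 10.05   # unbound (free) brain concentration (reported)
kp_uu         = 0.44    # unbound brain-to-plasma partitioning coefficient (reported)

# Unbound fraction in brain implied by C_u,brain = f_u,brain * C_total,brain
f_u_brain = c_u_brain / c_total_brain
print(f"implied f_u,brain  ~ {f_u_brain:.2f}")

# Unbound plasma concentration implied by K_p,uu = C_u,brain / C_u,plasma
c_u_plasma = c_u_brain / kp_uu
print(f"implied C_u,plasma ~ {c_u_plasma:.1f} nM")

# A K_p,uu below 1 points to net efflux at the blood-brain barrier.
print("net efflux at the BBB:", kp_uu < 1.0)
```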
These improved pharmacological properties of KDC203 can be rationalized by the removal of the hydroxyl group, which made this molecule more hydrophobic. A comparison of the BBB penetrance of large numbers of compounds indicated that the capacity of a given compound to form hydrogen bonds correlates inversely with their brain penetrance. In fact, aside from a compound's molecular size no other characteristic has been shown to be more predictive of its brain penetrance [48]. The V u,brain values of oleandrin (3.98 mL/g brain) and KDC203 (2.90 mL/g brain) that we computed are in a comparable range to other CNS-therapeutics like the opioid morphine (2.1 mL/g brain) [64] or the central-acting antihypertensive drug clonidine (5.9 mL/g brain) [65]. Naturally, data from equilibrium dialyses may not accurately predict in vivo distributions, since the homogenization destroys biomolecular structures and, consequently, cannot capture biology that relies on intact cells, including the 'cellular trapping' that can at times be observed in intracellular acidified compartments for basic compounds [65]. The subcutaneous injection of identical doses of oleandrin or KDC203 (0.6 mg/kg) yielded pharmacologically active unbound brain concentrations of 4.61 nM for oleandrin and 10.05 nM for KDC203 ( Figure 5), near the concentration of this compound (12 nM) needed to generate the most profound reduction in PrP C levels in vitro ( Figure 6). The distinct pharmacological properties of oleandrin and KDC203 may reflect differences in non-specific interactions with brain tissue and differences in their total brain levels, as indicated by their unbound drug partitioning coefficients (K p,uu ). A crude estimation based on extrapolations suggests that a 10 nM concentration of unbound KDC203 might be attainable in human brains ( Figure S2). An interesting facet of our results is the observation that ReN VM cells exhibit considerable plasticity and interdependency regarding the steady-state levels of their NKA α subunits. As levels of ATP1A1 decreased upon KDC203 engagement, so did ATP1A2 levels in a manner that depended on the reduction in ATP1A1 levels. In contrast, ATP1A3 levels increased together with KDC203 levels in the cell culture medium, suggesting that the levels of this α subunit are not tied to ATP1A1, and instead may offer some level of functional compensation for the loss of ATP1A1 and ATP1A2. Moreover, our data suggest that in cells the KDC203-dependent reduction in PrP C levels can exceed the reduction in the levels of any of the NKA α subunits. If the most parsimonious explanation still applies here, namely that the same fundamental mechanism of action of a CG-dependent co-internalization of NKAs and PrP C is responsible across cellular paradigms, this finding may reflect differences in the rate at which distinct cell lines can replenish internalized NKAs versus PrP C at the cell surface. Sequence Alignment and Identification of Residues Lining CG Binding Pocket Sequence identity assessments made use of available sequences for S. scrofa AT1A1 (P05024), H. sapiens ATP1A1 (P05023), ATP1A2 (P50993), and ATP1A3 (P13637) using the Basic Local Alignment Search Tool (BLAST), allowing both the filtering of low complexity regions and the introduction of gaps, and using the following settings: E-threshold: 0.001; matrix: BLOSUM62. The multiple sequence alignment algorithm Clustal O (Version 1.2.4, EMBL-EBI, Hinxton, UK) was used to determine homologous residues lining the CG binding pocket. 
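For readers who want to reproduce the residue-mapping step without a BLAST or Clustal Omega installation, a pairwise alignment scored with a BLOSUM62 matrix yields the same kind of residue correspondence. The sketch below uses Biopython instead of the tools named above, and the short sequence segments are hypothetical placeholders rather than the actual ATP1A1/ATP1A3 sequences.

```python
# Pairwise residue mapping with a BLOSUM62-scored alignment (Biopython).
# Illustrative alternative to the BLAST/Clustal Omega workflow described above;
# the two segments are hypothetical placeholders, not real NKA sequences.
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10.0
aligner.extend_gap_score = -0.5

seg_a = "QAATEEEPQNDNLYLGVVLSAVVIITGCFSYYQ"   # placeholder segment A
seg_b = "QAGTEDDPSGDNLYLGIVLAAVVIVTGCFSYYQ"   # placeholder segment B

alignment = aligner.align(seg_a, seg_b)[0]
print(alignment)            # aligned columns identify homologous residue positions
print("score:", alignment.score)
```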
Structural Modeling and Assessment of Docking and BBB Penetrance Scores Because no crystal structure of a human NKA is available, homology models of human α subunits were built on available structures for pig NKAs. Modeling focused on PDB entry 4HYT because it was co-crystallized with a cardenolide CG, namely ouabain, and a Mg2+ ion at a resolution of 3.4 Å. Another available porcine NKA structure, co-crystallized with a bufadienolide, namely bufalin, and two K+ ions at a similar resolution (4RES), was initially compared to 4HYT to learn about the degree to which CGs with five- versus six-membered lactone rings and different conformational states of the NKA complex affect the CG binding pose. Based on primary sequence comparisons, it was hypothesized that human NKAs comprising ATP1A3 bound to a cardenolide would exhibit the same macro 3D structure as porcine NKAs. Atomic models of human ATP1A3 bound to ouabain or oleandrin were built using soft minimization of the ouabain geometry and of the amino acid residues within 5 Å around it. Binding free energies were approximated using the generalized Born surface area (GBSA) method. Docking studies within the human ATP1A3 model reproduced the binding mode of ouabain reported in the literature and suggested a highly similar binding pose for oleandrin. Design of Chemically Accessible Oleandrin Derivatives The Scaffolds 1-5 depicted in Figure 3 were designed based on their potential semi-synthetic accessibility (1-5 step sequences) from the commercially available natural products oleandrin and gitoxin. It was envisioned that compounds depicted by Scaffold 1 would be derived from oleandrin by acyl group modification at the C16 position. The compounds denoted by Scaffolds 2 and 3 are derivatives of gitoxin that can be obtained through the introduction of the C16 acyl group followed by a deglycosylation/reglycosylation sequence. The compounds denoted by Scaffolds 4 and 5 represent derivatives of oleandrin, with Scaffold 4 compounds accessible through a known oxidation of the 4′ position [66,67] followed by C16 acyl group adjustment. Scaffold 5 can be derived from Scaffold 4 through a known reductive amination [49]. All compounds were assessed and ranked based on their predicted Rankovic MPO.v2 and in silico docking scores. Synthesis of Shortlisted Oleandrin Derivative The oleandrin derivative KDC203 was synthesized by using a well-established protocol for the PCC oxidation of oleandrin [49,66]. Following the published purification protocols, the synthetic material was found to be >95% pure and was used as such for the subsequent biological studies. Characterization of KDC203 by Mass Spectrometry A 780 mM KDC203 stock solution in dimethyl sulfoxide (DMSO) was stored at 37 °C and sampled after 1, 3, 7 and 14 days. Each sample was frozen at collection. Upon thawing, the samples were spiked with deuterated oleandrin in DMSO and then dried in a centrifugal concentrator. The remaining solids were dissolved in 0.1% formic acid in a 1:1 methanol:water mix, making the concentration of deuterated oleandrin 2 mM and that of KDC203 equivalent to 10 mM. The solutions were infused at 2 µL per min through a heated electrospray ionization source coupled to an Orbitrap Fusion mass spectrometer. Mass spectra were collected on the Orbitrap mass analyzer at a nominal resolution of 60,000 with the automatic gain control target held at 4.0e5, and 30 s of data from each sample were averaged for quantification.
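Quantification against the spiked deuterated-oleandrin internal standard can be performed in several ways; one common single-point internal-standard calculation is sketched below with hypothetical peak intensities. It may differ from the exact procedure used here.

```python
# Single-point internal-standard quantification (illustrative only).
# Peak intensities are hypothetical; the internal-standard and nominal analyte
# levels follow the sample preparation described above (same units).

is_conc = 2.0            # spiked deuterated oleandrin internal standard
nominal_analyte = 10.0   # nominal KDC203 level in the infused solution

# Averaged peak intensities (arbitrary units) from the reference (day 0) sample:
ref_analyte, ref_is = 8.4e6, 1.6e6
response_factor = (ref_analyte / ref_is) / (nominal_analyte / is_conc)

# Averaged peak intensities from a stability time-point sample:
tp_analyte, tp_is = 7.9e6, 1.55e6
measured = (tp_analyte / tp_is) / response_factor * is_conc
print(f"remaining KDC203: {measured:.2f} ({100 * measured / nominal_analyte:.0f}% of nominal)")
```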
Animal Husbandry The use of Atp1a1 s/s mice [55] was kindly authorized by Dr. Jerry B Lingrel (Department of Molecular Genetics, Biochemistry and Microbiology, University of Cincinnati College of Medicine, Cincinnati, OH 45267-0524, USA) and the animals were obtained from NOD. The mice were kept at no more than five mice per cage on a 12 h artificial day/night cycle. Cage changes occurred once a week. The mice were subjected to daily health checks for activity and overall appearance. Drinking water was available ad libitum and the mice were fed an 18% protein chow. All animal procedures were in accordance with the Canadian Council on Animal Care, reviewed and authorized by the University Health Network Animal Care Committee, and approved under Animal Use Protocol 6182. Tritium-Based Comparison of Bioavailability of Oleandrin and KDC203 Tritiated KDC203 with a specific activity of 1.6 Ci/mmol and a concentration of 1.0 mCi/mL was obtained through a customized radiolabeling request (Moravek Inc., Brea, CA, USA). To minimize rapid back-exchange of tritium with available protons, the radiolabeled product was overwhelmed with non-labeled water and organic solvent three times. This procedure reduced the amount of exchangeable tritium to <0.1%. The certificate of analysis attested to a 100% radiochemical purity on the basis that 99.68% of the radiolabeled material co-eluted with the unlabeled reference standard on a Zorbax SX 4.6 × 250 mm column (catalog number 959990-912, Agilent Technologies Canada, Mississauga, ON, Canada) using a mobile phase composed of 26% methanol, 26% acetonitrile, 48% water and 0.1% TFA (v/v). Subcutaneous bolus injections of Atp1a1 s/s mice with these tritiated compounds were followed 24 h later by tissue dissection. To this end, the mice were deeply anesthetized by isoflurane inhalation, then exsanguinated (with concomitant blood collection) by a two-minute transcardiac perfusion with PBS. Brain, liver, kidney, and heart tissues were rapidly dissected. Tissues were homogenized in a RIPA buffer composed of 1% NP40, 0.5% DOC, 0.1% SDS, 150 mM NaCl, 100 mM Tris (pH 8.3) in 2 mL straight-sided test tubes with O-ring seals using 3 × 1 min bead beating pulses (Beadbeater-8, Biospec, Bartlesville, OK, USA) and 200 mg of 1 mm diameter zirconia beads (catalog number 11079110zx, Biospec). Tissue extracts containing tritiated CGs were transferred to vials preloaded with scintillation fluid, and their counts were determined by scintillation counting (LS6500 Liquid Scintillation Counter, Beckman Coulter, Brea, CA, USA). Concentration series, obtained by spiking known amounts of the tritiated compounds into extracts of the respective tissues from naïve mice, were used to translate experimentally obtained scintillation counts into CG concentrations. These analyses yielded total brain (C total,brain ), plasma (C total,plasma ), liver (C total,liver ) and kidney (C total,kidney ) concentrations as well as values for the total amount of the CGs of interest in the respective tissues (A brain ). Bidirectional MDCK Assay The transcellular fluxes of the CGs of interest were analyzed using Madin-Darby canine kidney Type II (MDCKII) cells seeded onto semi-permeable membranes. The permeability screen was conducted as a bidirectional assay encompassing the investigation of flux from the apical (A) to the basolateral (B) side and vice versa. This assay design provides the ability to elucidate the influence of extrusion pumps on the total flux.
Here, the focus was on the human ATP-dependent translocase (ABCB1), often referred to by its alternative names P-glycoprotein 1 or multidrug resistance protein 1 (MDR1), which has been reported to extrude several CGs from the brain [56,68]. We followed the convention of considering compounds MDR1 substrates if their efflux ratios, derived by dividing the coefficients for B to A fluxes by those for A to B fluxes, exceed a value of 2. According to complementary Food and Drug Administration (FDA) guidelines, the addition of Zosuquidar, a highly selective, potent, and non-competitive MDR1 inhibitor [69], should decrease the efflux ratios by at least 50% to reliably classify compounds of interest as MDR1 substrates. 12-well cell culture inserts with 0.4 µm pore size polycarbonate membranes (catalog number 140652, Thermo Fisher Scientific), deployed for the transport assay, were preincubated with cell culture media (see above) for 60 min to improve cell attachment. To monitor monolayer formation, the transepithelial electrical resistance (TEER) was measured with the EVOM2 meter (World Precision Instruments, Sarasota, FL, USA). The measured resistance was multiplied by the membrane surface area (Ω·cm2) and normalized by the background resistance from a non-seeded insert. Five days after cell seeding, the confluent cell layers were washed twice and preincubated for 30 min at 37 °C with Hank's balanced salt solution (HBSS) (catalog number 14025, Thermo Fisher Scientific) supplemented with HEPES (catalog number 15630080, Thermo Fisher Scientific), hereafter referred to as HBSS-HEPES. The membrane integrity was further evaluated by measuring Lucifer Yellow (LY) (catalog number 07-200-156, Thermo Fisher Scientific) transepithelial leakage. To this end, inserts were transferred to a new 12-well plate containing HBSS-HEPES, and 100 µM LY solution was added to the inserts. After moderate shaking of the Nunc plates at 37 °C for one hour at 70 rpm (to prevent the formation of an unstirred layer), the concentration of leaked LY was measured on the basolateral side at excitation/emission wavelengths of 428/536 nm using a SpectraMax I3 plate reader (Molecular Devices, San Jose, CA, USA). Only monolayers exhibiting an apparent permeability (Papp) of Lucifer Yellow of less than 5 × 10−8 cm/s and TEER values above 97 Ω·cm2, consistent with published resistance values [56,70], were included in subsequent analyses. Next, inserts that passed the inclusion criteria were transferred to new 12-well plates. The apical (A, insert) volume was 500 µL and the basolateral (B, well) volume was 1000 µL. For monitoring A→B versus B→A transport, a final concentration of 1 µM of the tritiated CG of interest, dissolved in HBSS-HEPES, was added to the inserts or the basolateral wells, respectively. When investigating the influence of MDR1 inhibition on transport, 5 µM Zosuquidar-HCl (catalog number SML1044, Sigma-Aldrich, St. Louis, MO, USA) was added to both compartments. All transport experiments were conducted in biological triplicates under moderate shaking (70 rpm) for a duration of 77 min. To determine the Papp, efflux ratio, and recovery, three aliquots were extracted from both compartments and their radioactivity counts measured with an LS6500 Liquid Scintillation Counter (Beckman Coulter). Recovery values over 70% were recorded for all biological replicates.
The following formula was used to calculate the apparent permeability coefficient Papp:

$$P_{\mathrm{app}} = \frac{dQ/dt}{A \cdot C_0}$$

In this equation, the term dQ/dt describes the amount of compound appearing in the receiver compartment per unit of incubation time and therefore the rate of permeation, A represents the membrane surface area, and C_0 is the starting concentration in the donor compartment. The efflux ratio was calculated as the ratio of the Papp coefficients from the two transport directions, B → A and A → B:

$$ER = \frac{P_{\mathrm{app},\,B \rightarrow A}}{P_{\mathrm{app},\,A \rightarrow B}}$$

To assess the reliability of the experimental results, the recovery was determined as follows, where C_f describes the final concentration after incubation and V the respective compartment volume:

$$\mathrm{Recovery}\ (\%) = \frac{C_{f,\mathrm{donor}} \cdot V_{\mathrm{donor}} + C_{f,\mathrm{receiver}} \cdot V_{\mathrm{receiver}}}{C_0 \cdot V_{\mathrm{donor}}} \times 100$$

Rapid Equilibrium Dialysis Rapid equilibrium dialyses (RED) were undertaken to determine the unbound fractions of oleandrin and KDC203 in brain (f u,brain ) and plasma (f u,plasma ). More specifically, postmortem brain tissue from Atp1a1 s/s mice, guinea pigs, and human frontal lobes was employed in these analyses. The origins of the human frontal lobe material were described before [71]. Briefly, this brain tissue is held in the brain bank of the Tanz Centre for Research in Neurodegenerative Diseases that was adopted from a former Canadian Brain Tissue Bank at the Toronto Western Hospital. The specific frontal lobe tissue used for these analyses originated from one male and one female who had died in their 70s of non-dementia causes. All tissues were homogenized in PBS and spiked with the respective tritiated CGs before being equilibrated against PBS buffer in a RED device (catalog number 89809, Thermo Fisher Scientific). Plasma was collected from Atp1a1 s/s mice and guinea pigs. The RED device was placed in a shaker orbiting at 225 rpm for five hours. Samples were then transferred from the RED device into scintillation vials for scintillation counting. The in vitro data from these RED analyses revealed the diluted unbound fraction (f u,d ) of the respective CGs by determining the concentration of the tritiated CGs through scintillation counting of volume aliquots harvested from the red-colored sample input side and the white-colored sample output side:

$$f_{u,d} = \frac{C_{\mathrm{output\ (buffer)}}}{C_{\mathrm{input\ (sample)}}}$$

Since plasma samples were undiluted, f u,d,plasma equals f u,plasma [72]. To account for the dilution of the brain homogenates, the following equation corrected for the dilution factor D [73]:

$$f_{u} = \frac{1/D}{\left(\frac{1}{f_{u,d}} - 1\right) + 1/D}$$

The RED-based measurements of the unbound fractions (f u ) of CGs in brain and plasma homogenates were used to estimate the unbound CG concentrations in the respective organs (C u ) that were achieved following the subcutaneous bolus injections of tritiated CGs into Atp1a1 s/s mice (see above):

$$C_{u} = f_{u} \cdot C_{\mathrm{total}}$$

With this information in hand, the unbound drug partitioning coefficient K p,uu was computed from the unbound CG concentrations in brain and plasma [64]:

$$K_{p,uu} = \frac{C_{u,\mathrm{brain}}}{C_{u,\mathrm{plasma}}}$$

Finally, the unbound volume of distribution in the brain (V u,brain ) was calculated by dividing the total amount of compound in the brain (A brain ) by the unbound concentration in the brain (C u,brain ):

$$V_{u,\mathrm{brain}}\ \left[\frac{\mathrm{mL}}{\mathrm{g}}\right] = \frac{A_{\mathrm{brain}}\ \left[\frac{\mathrm{ng}}{\mathrm{g\ brain}}\right]}{C_{u,\mathrm{brain}}\ \left[\frac{\mathrm{ng}}{\mathrm{mL}}\right]}$$
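A compact numerical illustration of these formulas is given below. The apparent-permeability inputs are the values reported in the Results; the dilution-factor and recovery examples are hypothetical.

```python
# Bidirectional-assay and RED-derived quantities, implementing the formulas above.

def efflux_ratio(papp_ab, papp_ba):
    """ER = Papp(B->A) / Papp(A->B); values above 2 flag an MDR1 substrate."""
    return papp_ba / papp_ab

# Reported Papp values (cm/s)
print(f"ER oleandrin ~ {efflux_ratio(7.06e-6, 145.79e-6):.1f}")
print(f"ER KDC203    ~ {efflux_ratio(7.24e-6, 70.60e-6):.1f}")

def fu_from_diluted(fu_d, dilution):
    """Correct an unbound fraction measured in a D-fold diluted homogenate."""
    return (1.0 / dilution) / ((1.0 / fu_d) - 1.0 + 1.0 / dilution)

# Hypothetical example: fu,d = 0.60 measured in a 5-fold diluted brain homogenate.
print(f"corrected f_u,brain ~ {fu_from_diluted(0.60, 5.0):.2f}")

def recovery(c0_donor, v_donor, cf_donor, cf_receiver, v_receiver):
    """Mass balance of a transport experiment, as a fraction of the dosed amount."""
    return (cf_donor * v_donor + cf_receiver * v_receiver) / (c0_donor * v_donor)

# Hypothetical A->B example: 1 uM dosed apically (0.5 mL); final concentrations of
# 0.62 uM (apical) and 0.09 uM (basolateral, 1.0 mL).
print(f"recovery ~ {100 * recovery(1.0, 0.5, 0.62, 0.09, 1.0):.0f}%")
```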
Treatment of ReN VM Cells with KDC203 ReN VM Human Neural Progenitor Cells (catalog number SCC008, Millipore Sigma) were grown in their undifferentiated form on 20 µg/mL Cultrex reduced growth factor basement membrane (catalog number 3433, R&D Systems, Minneapolis, MN, USA) in DMEM/F12 (catalog number 11320033, Thermo Fisher Scientific)-based media with 0.22 micron filter-sterilized 20 ng/mL human basic fibroblast growth factor (bFGF) (catalog number RKP09038, Reprokine, St. Petersburg, FL, USA), 20 ng/mL human recombinant epidermal growth factor (EGF) (catalog number RKP01133, Reprokine), 10 units/mL heparin Na+ salt (catalog number 3149-10KU, Sigma-Aldrich), 2% (v/v) B-27 supplement (catalog number 17504044, Gibco/Thermo Fisher Scientific), 1% (v/v) Glutamax (catalog number 35050061, Gibco/Thermo Fisher Scientific) and 1% (v/v) non-essential amino acids (catalog number 11140050, Gibco/Thermo Fisher Scientific). Differentiation into a co-culture of non-dividing cells that exhibit neuronal and astrocytic characteristics was initiated with the removal of growth factors and heparin salt from the media. Cells were differentiated for 7 days with replacement of the media every 2 days. Treatment was initiated on day 8 of differentiation with the addition of nanomolar concentrations of KDC203 or oleandrin (both solubilized in DMSO) to the differentiation media. Treatment continued for 7 days, during which 50% of the media containing the appropriate concentration of the drug was replenished daily. Vehicle-treated cells received the same concentration of DMSO as CG-treated cells. Western Blot Analyses Cells were lysed in ice-cold buffer containing 1% NP40, 150 mM Tris-HCl (pH 8.3) and 150 mM NaCl supplemented with a protease inhibitor cocktail (catalog number 11836170001, Sigma-Aldrich) and PhosSTOP phosphatase inhibitor cocktail (catalog number 4906845001, Millipore Sigma). Insoluble cellular debris was removed through a 5 min slow-speed centrifugation step (2000× g), followed by a 15 min fast-speed centrifugation (16,000× g). Protein concentrations of the supernatants were determined via the bicinchoninic acid (BCA) assay using the Pierce BCA Protein Assay Kit (catalog number 23225, Thermo Fisher Scientific). Following the BCA-based protein concentration measurements, protein levels were adjusted to identical concentrations with cell lysis buffer. Samples for immunoblot analyses were denatured and reduced in 1x Bolt LDS sample buffer (catalog number B0007, Thermo Fisher Scientific) and 2.5% β-mercaptoethanol (catalog number M6250, Sigma-Aldrich), then heated at 95 °C for 10 min and briefly cooled on ice immediately prior to gel loading. Proteins were separated by SDS-PAGE on 10% Bolt Bis-Tris gels (catalog number NW00105BOX, Thermo Fisher Scientific), then transferred to 0.45 micron PVDF membranes (catalog number IPVH00010, Millipore Sigma), and unspecific binding was reduced by blocking in 5% skimmed milk (catalog number SKI400, BioShop, Burlington, ON, Canada) for 1 h at room temperature. Membranes were then incubated in primary antibodies diluted in 5% skimmed milk and left overnight at 4 °C with gentle rocking. The following day, membranes were washed three times in 1x Tris-buffered saline with 0.1% Tween20 (catalog number TWN508, BioShop) (TBST), then incubated with the corresponding horseradish peroxidase (HRP)-conjugated secondary antibodies, diluted in 5% skimmed milk, and left on the rocker for 1 h at room temperature. Membranes were again washed three times in 1x TBST, then incubated with enhanced chemiluminescent (ECL) reagent (catalog number GERPN2232, GE HealthCare) for 1 min. Membranes were exposed to autoradiography film (catalog number CLMS810, MedStore, Toronto, ON, Canada) and developed in a film developer. Viability, Intracellular Ca2+, and ATP Content in Cells Exposed to KDC203 versus Oleandrin We used three assays for comparing the cellular health and metabolism of ReN VM cells treated with KDC203 versus oleandrin.
For each of these assays, the cells were passaged to clear-bottom 96-well tissue culture plates and differentiated for one week by the withdrawal of growth factors as described above. To initiate a cell viability assay, the cells were washed with PBS (catalog number D8537-500 ML, Sigma-Aldrich), then incubated for 20 min with 1 µM calcein-AM (catalog number 17783, Sigma-Aldrich) at 37 °C, followed by incubation in PBS supplemented with bovine serum albumin (BSA) (catalog number ALB001.50, BioShop) at 3% (w/v) to fully de-esterify AM esters. The subsequent analysis occurred in a microplate reader at excitation/emission wavelengths of 486 nm and 516 nm. To measure cellular ATP levels, CellTiter-Glo (catalog number G7570, Promega, Madison, Wisconsin, USA) was diluted 1:4 in PBS and added at a 1:1 (v/v) ratio to the growth factor-depleted cell differentiation medium covering the cells. Following a 2 min agitation at room temperature that caused the cells to lyse, the data from 1-s luminescence recordings were integrated and taken to reflect cellular ATP content. The intracellular Ca2+ concentration was assayed for 20 min at 37 °C with 2 µM Fluo-4-AM (catalog number F14201, Thermo Fisher Scientific) in PBS supplemented with 3% (w/v) BSA. Following a PBS wash and an additional 30 min incubation in BSA-supplemented PBS to fully de-esterify AM esters, the fluorescence was recorded at excitation and emission wavelengths of 486 and 516 nm, respectively, in a microplate reader and taken to reflect the intracellular Ca2+ content. The oleandrin measurements shown here were already included in a previous report (along with results for digoxin, not shown here) [18] and were at the time generated side-by-side with the data for KDC203. In assembling this report, we chose to again include the oleandrin results, this time next to the KDC203 data, to facilitate their direct comparison. Statistical Analyses Comparisons of steady-state protein levels by Western blot analyses made use of three biological replicates. Six biological replicates for each CG concentration tested were used to determine the cellular health and metabolism using the calcein-AM, CellTiter-Glo, and Fluo-4-AM assays. Biological triplicates were deployed for the statistical analyses of pharmacokinetic data comparing KDC203 and oleandrin in brain homogenates from distinct species. When several KDC203 concentrations were tested, we made the assumption that the sample populations were normally distributed and of equal variance. Consequently, one-way ANOVA-based means testing was applied to examine the null hypothesis. When it was rejected, post hoc two-tailed t-tests were employed in pairwise statistical comparisons of sample groups to determine p-values. Following general convention in biomedical research, one, two, and three asterisks indicate p-values of <0.05, <0.005, and <0.0005, respectively. The determination of significance was based on Bonferroni-corrected significance thresholds. The abbreviation 'ns' is shown when p-values failed to meet the significance threshold. Conclusions This work established KDC203, chemically known as 4′-dehydro-oleandrin, as a lead CG for brain applications. Relative to oleandrin, KDC203 exhibited several improvements to its pharmacological properties: it reached higher CNS concentrations, presumably because it is a lesser MDR1 substrate and exhibits lower unspecific sequestration in plasma and brain tissues. It was also less toxic than oleandrin yet exhibited similar potency in in vitro PrP C reduction assays.
Our in silico work predicted additional molecules to have favorable brain penetrance. KDC203 was an attractive choice for this initial characterization, as opposed to KDC207 (despite similar predicted docking and brain penetrance scores), because KDC203 can also serve as a precursor in the synthesis of KDC201-KDC206. Based on their chemical characteristics, it is predicted that this subset of compounds will be more hydrophobic than KDC203. Naturally, as hydrophobicity increases further, so does the insolubility of compounds in physiological environments, placing a practical limit on chemical substitutions that will need to be empirically evaluated. When assessing the potency of any of these compounds in vivo, it needs to be considered that wild-type mice and rats express an Atp1a1 α subunit that is far less responsive to CGs than its human ortholog. Further evaluations of the merits that KDC203 may hold for the treatment of prion diseases seem warranted.
The Wear Behavior of Textured Steel Sliding against Polymers Artificially fabricated surface textures can significantly improve the friction and wear resistance of a tribological contact. Recently, this surface texturing technique has been applied to polymer materials to improve their tribological performance. However, the wear behavior of textured tribo-pairs made of steel and polymer materials has been less thoroughly investigated and is not well understood; thus, it needs further research. The aim of this study is to investigate the wear properties of tribological contacts made of textured stainless steel against polymer surfaces. Three polymer materials were selected in this study, namely, ultrahigh molecular weight polyethylene (UHMWPE), polyoxymethylene (POM), and polyetheretherketone (PEEK). Wear tests were operated in a ring-on-plane mode. The results revealed that the texture features and material properties affected the wear rates and friction coefficients of the textured tribo-pairs. In general, PEEK/textured steel achieved the lowest wear rate among the three types of tribo-pairs investigated. Energy dispersive X-ray spectroscopy (EDX) analysis revealed that the contents of the elements C and O on the contacting counterfaces varied with texture features and indicated different wear behavior. Experimental and simulated results showed differences in the stress distribution around the dimple edge, which may influence wear performance. Wear debris with different surface morphologies were found for tribo-pairs with varying texture features. This study has increased the understanding of the wear behavior of tribo-pairs between textured stainless steel and polymer materials. Introduction The control of friction and wear is essential to guaranteeing that tribological systems work efficiently, reliably, and durably. Surface texturing, usually involving fabricating well-defined identical features (e.g., dimples) on tribological contacts, is an effective way to reduce friction. The most dominant effect of surface texturing is to provide an additional hydrodynamic lift to increase the load-carrying capacity [1], to reserve lubricant, and to trap wear debris [2]. Surface texturing has been applied in thrust bearings [3,4], journal bearings [5,6], cylinder liners [7], piston rings [8], silicon carbide (SiC) [9,10] and mechanical seals [11,12] to improve the tribological performance of the mating surfaces. However, most studies [13][14][15] focused on the effects of surface texturing on the friction of stiff counterfaces, while few investigations have addressed its influence on wear, especially for soft counterfaces. Soft counterfaces, such as ultrahigh molecular weight polyethylene (UHMWPE), are light and have good self-lubricating and anti-wear properties. Counterfaces consisting of stiff and soft materials have been applied in tribological contacts such as artificial joints. The effects of surface texturing on the frictional properties of soft counterfaces have therefore attracted great research interest. Micro-dimples on a polydimethylsiloxane (PDMS) surface can significantly reduce friction [16][17][18]. Micro-dimples on polyoxymethylene (POM) and polypropylene (PP) significantly improved the tribological performance under dry friction [19].
The friction pair consisting of textured 316 stainless steel surfaces against polytetrafluoroethylene (PTFE) presented a reduced friction coefficient [20]. Textured UHMWPE reduced friction and increased wear resistance [21,22]. Different from stiff materials, soft materials usually deform when they are subjected to forces. Thus, the tribological behavior of counterfaces between stiff and soft materials is expected to differ from that of contacts between two stiff materials. The aim of this study is to investigate the wear behavior of textured stainless steel sliding against polymer materials in order to guide the design of surface textures on soft counterfaces that prolong the life of tribological contacts. Dimples were fabricated on 304 stainless steel samples with varying parameters, namely, depth and area density. Wear tests were conducted with ring-on-plane testing. The friction coefficients and wear rates were measured. The worn surface morphologies of the counterfaces and wear debris were examined to assess the wear mechanism. Energy dispersive X-ray spectroscopy (EDX) was performed to investigate changes in the elements C and O of the contacting counterfaces. Finite element analysis (FEA) was carried out to analyze the stress distribution of the mating contacts. Material and Methods Lithography-electrolytic technology was used to fabricate surface patterns on the stainless steel. The fabrication process is illustrated in Figure 1. Firstly, the steel was ground and polished (Sa ≤ 50 nm). After cleaning, the samples were evenly coated with a layer of photoresist using spin coating. Then, through exposure and development, a patterned photoresist mask was formed on the specimen surface. Finally, the unmasked steel was removed by electrolytic etching to fabricate the surface textures. The fabricated dimples are presented in Figure 2. UHMWPE, POM, and polyetheretherketone (PEEK) are widely used polymers because of their low friction and superior mechanical properties and were studied in this work (Table 1). The geometrical parameters of the dimples are shown in Table 2. Friction and wear tests were carried out using a commercial test rig in a ring-on-disc configuration (Figure 3). The load was set at 70 N at a sliding speed of 0.44 m/s. The reason for choosing a normal force of 70 N is that polymer materials are usually soft and undergo plastic deformation under high stress. Therefore, a relatively small force was applied to the tribo-pairs. The nominal contact pressure was 0.15 MPa for non-textured tribo-pairs and 0.167 MPa, 0.188 MPa, and 0.25 MPa for textured tribo-pairs with area densities of 10%, 20%, and 40%, respectively. The lubricant was deionized water. The test duration was 6 hours. The surface morphologies of the mating surfaces after wear were observed using a digital microscope (Keyence VHX-500, Osaka, Japan) and a 3D surface profiler (Bruker ContourGT-K, Billerica, MA, USA). EDX was used to analyze the chemical composition of the tribological contacts. After the wear tests, the lubricant was collected and filtered to recover the wear debris. The wear debris were imaged using a scanning electron microscope (Sigma 500, Carl Zeiss, Jena, Germany) to study their morphologies.
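The contact pressures quoted above follow directly from the applied load and the load-bearing (non-dimpled) fraction of the apparent contact area; the short sketch below reproduces them, along with the total sliding distance, from the stated test conditions. The apparent contact area is back-calculated from the untextured 0.15 MPa value.

```python
# Nominal contact pressure versus dimple area density, and total sliding distance.
# The apparent (untextured) contact area is back-calculated from the reported
# 0.15 MPa baseline; everything else uses the test conditions stated above.

load_N = 70.0
p_untextured_MPa = 0.15
apparent_area_mm2 = load_N / p_untextured_MPa  # ~467 mm^2

for density in (0.10, 0.20, 0.40):
    bearing_area_mm2 = apparent_area_mm2 * (1.0 - density)
    p_MPa = load_N / bearing_area_mm2
    print(f"area density {density:.0%}: nominal pressure ~ {p_MPa:.3f} MPa")

sliding_speed_m_s = 0.44
duration_s = 6 * 3600
print(f"total sliding distance ~ {sliding_speed_m_s * duration_s:.0f} m")
```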
The mean wear rate was determined by weighing the samples before and after each test. FEA was performed with the ANSYS software to simulate the contact situation at the dimple edge. The lower sample was set as a regular hexahedral structure. The sweep meshing method was chosen. The upper sample was set as an irregular rectangle with a cylindrical dimple. The grid cell size was set as 15 µm. The frictional contact type was chosen to simulate the contact behavior of real working conditions. The Coulomb friction coefficient was set as 0.2 in the FEA. Results As stated above, this work was performed to understand the wear mechanisms of textured steel sliding against polymer materials. The friction properties and wear rates of non-textured and textured tribo-pairs were obtained and compared. In parallel, the surface morphologies of the counterfaces and wear debris were imaged. EDX revealed the changes in the elements C and O of the tribo-pairs after wear. Finite element simulation provided insights into the contacting conditions of the textured tribo-pairs. Figure 4 displays the friction coefficients of the tribo-pairs made of stainless steel against UHMWPE, POM, and PEEK. Figure 4a,a' presents the friction coefficients of the tribo-pairs of steel sliding against UHMWPE when the dimple depths were 10 µm and 5 µm, respectively. Compared to the untextured tribo-pair, the friction coefficient of the textured tribo-pair was increased. The tribo-pair with a dimple depth of 5 µm had a smaller friction coefficient than that with a dimple depth of 10 µm. Figure 4b,b' reveals the friction coefficients of POM sliding against steel. POM/textured steel showed friction-reducing effects compared to the untextured pair. POM/steel (dimple depth of 5 µm, area density of 10%) had the minimum friction coefficient. Figure 4c,c' shows the friction coefficients of PEEK/steel. PEEK/textured steel had increased friction compared to the non-textured pair.
The tribo-pair of PEEK/steel with a dimple depth of 10 µm and an area density of 40% had the maximum friction coefficient. Although POM/untextured steel revealed the highest friction among the untextured tribo-pairs, POM/textured steel presented the lowest friction. These results indicate that surface textures play an important role in the friction of the tribo-pairs studied.

Wear Rates
Figure 5 reports the wear rates of steel sliding against PEEK, POM, and UHMWPE. For the untextured tribo-pairs, the wear rates decreased with increasing elastic modulus of the counterface, and the tribo-pair of PEEK sliding against stainless steel had the lowest wear rate in the untextured and all textured conditions. At a dimple depth of 10 µm, UHMWPE/textured steel with dimple area densities of 10% and 20% showed reduced wear rates compared to the non-textured tribo-pair, while the wear rate increased at a dimple area density of 40%. POM/textured steel presented increased wear. The wear rate of PEEK/textured steel decreased at a dimple area density of 10% and increased with further increases in dimple area density, compared to the untextured tribo-pair. Overall, the wear rates of UHMWPE were higher than those of PEEK but lower than those of POM. At a dimple depth of 5 µm, POM/textured steel and PEEK/textured steel showed reduced wear rates at a dimple area density of 10% and increased wear rates at area densities of 20% and 40%, while UHMWPE/textured steel revealed increased wear at all dimple area densities.
The wear rates of POM were higher than those of PEEK but lower than those of UHMWPE. Based on these results, it can be ascertained that the surface texture parameters, namely the dimple depth and dimple area density, are crucial factors influencing the wear performance of the tribo-pairs studied.

Surface Morphologies
Figure 6a-d present the worn morphologies of the untextured steel surface and of the textured steel surfaces with dimple area densities of 10%, 20%, and 40% after sliding against POM. Grooves and pits appeared on the untextured worn steel surface, and its black areas indicate adhesive wear scars that may be caused by carbonization due to high local pressure and temperature. The textured steel surfaces had obviously fewer scratches, indicating that the contact conditions improved. However, the wear around the edges of the dimples was much more severe, which may be caused by stress concentration. Along the sliding direction of the worn textured steel specimens, unevenly distributed black areas were observed, which may be due to polymeric material transfer and were further studied by EDX analysis. Figure 6a'-d' show the surface morphologies of the mating POM counterfaces corresponding to Figure 6a-d. As the dimple area density increased, the grooves became wider and deeper, which is believed to contribute to the increase in wear rates (Figure 5a). Figure 7 displays the steel surface and the corresponding mating POM surface for a dimple area density of 10% and a dimple depth of 5 µm. Under this condition, the wear rate was the lowest among the POM/steel tribo-pairs, showing a wear-reducing effect compared to the untextured tribo-pair (Figure 5b). Small protuberances can be seen on the steel around the edges of the grooves; these protuberances were possibly transferred POM, which may contribute to the wear reduction and were further analyzed through EDX. The corresponding mating POM surface (Figure 7b) exhibited fine, shallow grooves with protuberances around them.
Figure 8a,b present an SEM image of the untextured steel surface after sliding against POM and the corresponding EDX analysis of the red-framed area. The untextured surface exhibited many scaly protuberances, and its C and O contents were higher, suggesting that POM was transferred to the mating steel surface. As shown in the SEM image of the steel surface textured with a dimple area density of 10% and a dimple depth of 5 µm (Figure 8c), the worn surface was smooth, with only a few scratches and fine humps. Compared to the non-textured sample, the C and O contents decreased (Figure 8d), indicating that the adhesion and transfer of POM were weakened during the wear process. Meanwhile, on the steel surface textured with a dimple area density of 10% and a dimple depth of 10 µm (Figure 8e), the areas around the dimples were severely worn, indicating that this tribo-pair had experienced harsh wear conditions. The EDX analysis (Figure 8f) showed a higher C and lower O content in the red-framed area compared to Figure 8c,d, implying that POM underwent severe burning and material transfer during the wear process. The EDX analysis of the area in Figure 8g revealed a lower C and higher O content compared to the areas in Figure 8e, suggesting weakened burning and wear of POM.
Figures 9-12 show the surface morphologies of the wear debris found in the tribo-pairs after wear. The wear debris of the UHMWPE/untextured steel pair (Figure 9) had a flake shape and a smooth surface. For UHMWPE/steel with a dimple area density of 10% and a dimple depth of 10 µm, for which the dimples showed a wear-reducing effect, the wear debris (Figure 10) presented scaly wear scars and was more irregular. For UHMWPE/steel with a dimple area density of 20%, the wear debris (Figure 11) revealed a crumby shape, indicating that material was peeling off. When the wear rate increased (UHMWPE/steel with a dimple area density of 40% and a dimple depth of 10 µm), the wear debris (Figure 12) became rod-shaped.

Simulation
To further investigate the wear around the dimple edges, a finite element simulation was carried out in ANSYS Workbench. The contact between the steel and the POM was simplified as a block with a dimple sliding against a plane in solid contact (Figure 13a), where the textured steel was assumed to act as the plane and the POM as the block.
Their surface roughness was ignored. Figure 13b,c shows the pressure distribution around a single dimple for dimple depths of 5 µm and 10 µm, respectively. The areas at the dimple edges along the sliding direction of the mating surfaces exhibited severe stress concentration, so it can be speculated that these areas experience higher stress during articulation and produce stronger cutting effects on the counterface. The simulation showed that the affected area due to the stress concentration at the dimples was much smaller for the sample with a dimple depth of 5 µm (Figure 13b) than for the sample with a dimple depth of 10 µm (Figure 13c), which probably contributed to the lower wear rate of the samples with a dimple depth of 5 µm.

Discussion
This paper investigated the effects of artificially fabricated surface textures on tribo-pairs of steel sliding against polymer materials. A reduction of the wear rate was observed, which can probably be attributed to the effect of the dimples in trapping wear debris. The friction coefficients in this study indicate a mixed lubrication mode, in which part of the mating surface is in solid contact while hydrodynamic lubrication acts in the remaining area; the hydrodynamic lubrication effect may help reduce wear.
The adhesive behavior and transfer of POM were also weakened (Figure 8c,d), thus possibly reducing the wear rates. At the same time, the lower production of wear debris probably resulted in an incomplete polymeric transfer layer, which caused an increase in the friction coefficient. On the other hand, an increase in the wear rate was also observed in some conditions, which may be due to the stress concentration at the dimples revealed by the simulation results. These results indicate that, for the tribo-pairs studied, surface textures can affect the wear rates and thus the wear resistance, and an optimized design of the surface texture parameters will improve the wear properties and prolong the life of tribological contacts of steel sliding against soft materials. Figure 5 indicates that, for the non-textured tribo-pairs, the wear rates decreased with increasing elastic modulus, because a higher elastic modulus implies a higher resistance to wear; this is also why the wear rate of PEEK/textured steel was lower than that of UHMWPE/textured steel. Although POM is stiffer than UHMWPE, the wear rate of POM/textured steel was greater than that of UHMWPE/textured steel at a dimple depth of 10 µm. These results imply that the elastic modulus of the counterpart, as well as the artificially manufactured surface textures, affected the wear performance of the tribo-pairs studied. Dimple area density and depth are two important parameters influencing the tribological properties of textured tribo-pairs [2]. Taking POM/textured steel as an example, at a dimple depth of 10 µm the affected area of stress concentration increased with increasing dimple area density (Figure 14), which probably led to the rising wear rate of POM/textured steel (Figure 5). At a dimple depth of 5 µm, the stress concentration was lower than at a dimple depth of 10 µm (Figure 13), which may be why the wear rate of POM/textured steel was smaller at a dimple depth of 5 µm. Compared to POM/textured steel, UHMWPE/textured steel showed the opposite trend: its wear rate was higher at a dimple depth of 5 µm than at 10 µm, which was probably due to the difference in material properties between POM and UHMWPE. Correspondingly, the lowest wear rate for the POM tribo-pairs occurred at a dimple depth of 5 µm and a dimple area density of 10%, while that for the UHMWPE tribo-pairs occurred at a dimple depth of 10 µm and a dimple area density of 20%.
Surface morphology analysis is a commonly used method to inspect tribological contact conditions. The surface morphologies indicated that adhesive wear and burning of the polymer materials occurred during articulation (Figure 6a). After wear, the textured specimens showed few scratches compared to the untextured specimens (Figure 8a), which corresponds to the reduced wear rates of the specimens with dimple area densities of 10% and 20% relative to the untextured ones (Figure 5b). On the other hand, much more severe wear was observed around the edges of the dimples (Figure 6b,c), which was likely caused by stress concentration. The FEA (Figures 13 and 14) revealed that the surface textures possibly caused stress concentration around the edges of the dimples and thus resulted in stronger forces on the soft counterfaces. For the non-textured tribo-pairs, the steel surface was covered by a large area of polymer transfer film (Figure 8a,b), while the wear debris presented flake-like morphologies (Figure 9) that may have peeled off from the complete transfer film. As the number of dimples increased with the dimple area density, the affected areas increased, as revealed by the FEA (Figure 14). The strengthened affected areas worked like blades between the articulating surfaces: when there were more blades, the cutting effects were stronger, and the wear debris therefore tended to show micro-cutting morphologies (Figure 12). Under the wear-reduced conditions, the wear debris were smaller and lumpy; when the wear increased, the wear debris were rod-shaped or twisted due to the stronger cutting effects involved.
The wear debris thus had different surface morphologies for the various tribo-pairs, indicating different wear mechanisms.

Conclusions
This study investigated the wear behavior of friction pairs involving soft materials, which has been less studied in the current literature, in order to achieve a more thorough understanding of the wear behavior of soft counterfaces. Based on the techniques applied in this study, it can be concluded that the wear behavior is related to the elastic modulus of the soft material and to the parameters of the surface textures. Polymeric material transfer occurred, which may participate in adhesive wear. The wear debris found in the tribo-pairs with varying surface textures presented different surface morphologies: the non-textured tribo-pair produced flaky, smooth wear debris, whereas with reduced wear the debris tended to be lumpy and with increased wear it tended to be rod-shaped or twisted.
Anti-Growth, Anti-Angiogenic, and Pro-Apoptotic Effects by CX-4945, an Inhibitor of Casein Kinase 2, on HuCCT-1 Human Cholangiocarcinoma Cells via Control of Caspase-9/3, DR-4, STAT-3/STAT-5, Mcl-1, eIF-2α, and HIF-1α

Overexpression of casein kinase 2 (CK2) has an oncogenic and pro-survival role in many cancers. CX-4945 (Silmitasertib) is a CK2 inhibitor with anti-cancer and anti-angiogenic effects. To date, the anti-cancer effect and mechanism of CX-4945 on human cholangiocarcinoma (CCA) remain unclear. This study investigated whether CX-4945 inhibits growth and induces apoptosis of HuCCT-1 cells, a human CCA cell line. Of note, treatment with CX-4945 at 20 µM markedly reduced survival and induced apoptosis of HuCCT-1 cells, as evidenced by nuclear DNA fragmentation, PARP cleavage, activation of caspase-9/3, and up-regulation of DR-4. Although CX-4945 did not affect the phosphorylation and expression of CK2, it vastly inhibited the phosphorylation of CK2 substrates, supporting the drug's efficacy in inhibiting CK2 and its downstream pathway. Importantly, knockdown of CK2, which partially suppressed the phosphorylation of CK2 substrates, resulted in a significant reduction of HuCCT-1 cell survival. In addition, CX-4945 reduced the phosphorylation and expression of STAT-3 and STAT-5 in HuCCT-1 cells, and pharmacological inhibition or respective knockdown of these proteins resulted in significant growth suppression of HuCCT-1 cells. CX-4945 also decreased Mcl-1 expression while increasing eIF-2α phosphorylation in HuCCT-1 cells. Furthermore, there was a time-differential negative regulation of HIF-1α expression by CX-4945 in HuCCT-1 cells, and knockdown of HIF-1α caused a significant reduction of cell survival. In summary, these results demonstrate that CX-4945 has anti-growth, anti-angiogenic, and pro-apoptotic effects on HuCCT-1 cells, which are mediated through control of CK2, caspase-9/3, DR-4, STAT-3/5, Mcl-1, eIF-2α, and HIF-1α.

Introduction
Cholangiocarcinoma (CCA), which forms in the bile ducts, currently accounts for ~15% of hepatobiliary cancers and ~3% of gastrointestinal malignancies [1,2], and men have a slightly higher incidence of CCA and mortality from this cancer than women [3]. Although surgery, curative liver transplantation, and radiation therapy are options for patients with cholangiocarcinoma [4], the 5-year survival rate is around 40% or less even after curative surgery [4,5]. Thus, there remains a great need for more effective drugs to enhance locoregional disease control and overall survival in cholangiocarcinoma patients. Casein kinase 2 (CK2) is a serine/threonine protein kinase that phosphorylates many intracellular protein substrates in normal and cancer cells. Accordingly, de-regulation of CK2 has been linked to tumorigenesis as a potential protection mechanism for cancer cells [6-8]. Of note, there is accumulating evidence that CK2 is overexpressed in CCA [9], and there is a significant association of CK2 overexpression with progression and prognosis of CCA [10]. The existing body of research on CK2 thus strongly suggests CK2 inhibition as a targeted therapy for CCA.

CX-4945 Inhibits Growth and Induces Apoptosis of HuCCT-1 Cells
We initially investigated the effect of CX-4945 at different concentrations (5, 10, and 20 µM) for 24 h on the survival of HuCCT-1 cells by using cell count analysis. As shown in Figure 1A, CX-4945 induced a concentration-dependent reduction of HuCCT-1 cell survival compared with control cells.
To determine whether CX-4945's anti-survival effect is limited to HuCCT-1 cells, we additionally examined the effect of CX-4945 at different concentrations on the growth of SNU-1196, another human CCA cell line. Similarly, data from the cell count assay revealed the ability of CX-4945 to reduce the survival of SNU-1196 cells in a dose-dependent fashion (Supplementary Figure S1A). CX-4945's growth-suppressive effect on SNU-1196 cells was also confirmed by phase-contrast microscopy (Supplementary Figure S1B). To further assess the drug's specificity, we next tested the effect of CX-4945 at 5, 10, and 20 µM for 24 h on the survival of normal HDFs. As shown in Figure 1B, CX-4945 at the doses tested was not cytotoxic to HDFs. Using a clonogenic assay, we next sought to explore whether CX-4945 inhibits the survival and proliferation of HuCCT-1 cells. As shown in Figure 1C, there was markedly diminished colony formation of HuCCT-1 cells treated with CX-4945 at 20 µM for 2 weeks. Densitometric data of Figure 1C are shown in Figure 1D. Next, whether CX-4945 induces apoptosis of HuCCT-1 cells was determined by measuring nuclear DNA fragmentation, a hallmark of apoptosis. As shown in Figure 1D, CX-4945 treatment at 10 or 20 µM for 24 h resulted in apparent nuclear DNA fragmentation in HuCCT-1 cells. The chemical structure of CX-4945 is shown in Figure 1E. These results demonstrated that CX-4945 at 20 µM has solid anti-survival and pro-apoptotic effects on HuCCT-1 cells. Because of its strong growth-suppressive and apoptosis-inducing effects on HuCCT-1 cells, this 20 µM concentration of CX-4945 was chosen for further studies. To understand the molecular mechanisms by which CX-4945 inhibits growth and induces apoptosis of HuCCT-1 cells, we next carried out time course experiments to determine whether CX-4945 modulates the expression of apoptosis-related proteins, including PARP, caspases, and death receptors (DRs), in HuCCT-1 cells. In this study, the ability of CX-4945 to induce activation of caspase-9/3 in HuCCT-1 cells was assessed by measuring not only decreased expression levels of procaspase-9/3 and increased expression levels of cleaved (active) caspase-9/3 but also the generation of cleaved PARP, which is mediated by active caspases. As shown in Figure 2, treatment with CX-4945 at 24 h resulted in high levels of cleaved PARP in HuCCT-1 cells compared with control cells. Moreover, CX-4945 treatment led to increased expression levels of cleaved caspase-9 at 4 h while decreasing expression levels of procaspase-3 at 24 h in HuCCT-1 cells. In addition, there was a slight elevation of DR-4 protein expression in HuCCT-1 cells treated with CX-4945 at 8 and 24 h. Expression levels of control actin protein remained constant under these experimental conditions. CX-4945 is a pharmacological inhibitor of CK2α [11], and targeting CK2 has been proposed for the treatment of CCA [13]. Given that CK2 is a constitutively expressed and active Ser/Thr protein kinase that phosphorylates hundreds of protein substrates, we next asked whether CK2α is expressed and phosphorylated and has kinase activity in HuCCT-1 cells. As shown in Figure 3A, in the absence of CX-4945, results of the kinetic study demonstrated that there were substantial levels of phosphorylated CK2α at 2, 4, and 8 h, followed by a significant decline at 24 h (upper panel). There were also high total expression levels of CK2α at 2, 4, and 8 h, followed by a substantial reduction at 24 h (lower panel).
Furthermore, as shown in Figure 3B, of interest, there were high levels of phosphorylated CK2 substrates at 2, 4, and 8 h, followed by a marked decrease at 24 h (upper panel). As expected, CX-4945 did not alter the expression and phosphorylation levels of CK2α in HuCCT-1 cells (Figure 3A). Still, the drug vastly reduced levels of phosphorylated CK2 substrates in HuCCT-1 cells at the times tested (Figure 3B). These results point out that CK2α is constitutively expressed/phosphorylated and has kinase activity in HuCCT-1 cells, and that CX-4945 strongly inhibits CK2α activity and its downstream pathway without affecting its protein expression levels in these cells. Control actin protein levels remained unchanged under these experimental conditions. We next sought to explore the role of reduced CK2α activity in CX-4945's anti-survival effect on HuCCT-1 cells by using CK2α siRNA transfection. As shown in Figure 3C, there were lower protein expression levels of CK2α in the CK2α siRNA-transfected HuCCT-1 cells than in control siRNA-transfected ones (upper panel). In addition, there were lower levels of phosphorylated CK2 substrates in CK2α siRNA-transfected HuCCT-1 cells than in control siRNA-transfected ones (middle panel). Control actin protein levels remained constant under these experimental conditions (lower panel). Importantly, as shown in Figure 3D, data of cell count analysis showed that knockdown of CK2α led to a significant reduction of HuCCT-1 cell survival. Evidence suggests a role of STATs in cancer cell survival [14,15] and apoptosis [16]. To date, little is known about the expression and phosphorylation (activation) of STATs in CCA cells. We thus checked whether STAT-3 and STAT-5, two critical members of the STATs family, are expressed and phosphorylated in HuCCT-1 cells. Notably, as shown in Figure 4A, there were substantial expression and phosphorylation levels of STAT-3 and STAT-5 in HuCCT-1 cells at the times tested. Maximal phosphorylation levels of STAT-3 and STAT-5 were seen at 24 h. However, CX-4945 treatment, particularly at 24 h, led to a substantial reduction of the phosphorylation and expression levels of STAT-3 protein in HuCCT-1 cells. In addition, CX-4945 treatment at 24 h strongly down-regulated STAT-5 protein phosphorylation levels, but it only slightly reduced the protein expression levels in HuCCT-1 cells. Results of Western blotting from triplicate experiments, as shown in Figure 4B, further confirmed the ability of CX-4945 to significantly inhibit not only the phosphorylation and expression of STAT-3 but also the phosphorylation of STAT-5 in HuCCT-1 cells. Using an RT-PCR experiment, we further tested whether the reduced STAT-3 protein expression by CX-4945 was due to decreased STAT-3 transcripts in HuCCT-1 cells. As shown in Figure 4C, data of RT-PCR analysis from triplicate experiments demonstrated that CX-4945 significantly down-regulates STAT-3 mRNA expression in HuCCT-1 cells. Densitometric data of Figure 4B,C, with the protein phosphorylation and expression levels of STAT-3 and STAT-5 normalized to those of control actin or total STAT-5 and the mRNA expression levels of STAT-3 normalized to those of control actin, are shown in Figure 4D,E, respectively. Data are mean ± SE of three independent experiments; * p < 0.05 compared to the vehicle control.
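For readers who want to reproduce the kind of normalization described above, the following is a minimal sketch (not the authors' analysis script); the band intensities are hypothetical placeholders, and the workflow simply divides each phospho-band by its loading control and expresses the result as fold of the vehicle control.

```python
import numpy as np

# Hypothetical densitometry values from three independent experiments
# (arbitrary units); placeholders, not data from Figure 4.
phospho_stat3_ctrl = np.array([1.00, 0.95, 1.10])  # vehicle control lanes
phospho_stat3_cx   = np.array([0.40, 0.35, 0.50])  # CX-4945-treated lanes
actin_ctrl         = np.array([1.00, 1.02, 0.98])
actin_cx           = np.array([1.01, 0.99, 1.00])

def normalized_fold(signal, loading, signal_ctrl, loading_ctrl):
    """Normalize each band to its loading control, then express as fold of
    the vehicle control; returns per-experiment fold changes."""
    return (signal / loading) / (signal_ctrl / loading_ctrl)

fold = normalized_fold(phospho_stat3_cx, actin_cx,
                       phospho_stat3_ctrl, actin_ctrl)
mean, se = fold.mean(), fold.std(ddof=1) / np.sqrt(fold.size)
print(f"p-STAT-3 after CX-4945: {mean:.2f} +/- {se:.2f} of control")
```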
Pharmacological Inhibition or Respective Knockdown of STAT-3 and STAT-5 Leads to Reduction of HuCCT-1 Cell Survival
Using AG490, a pan-inhibitor of STATs, we next investigated the role of reduced expression and activity of STAT-3 and STAT-5 in the CX-4945-induced growth suppression of HuCCT-1 cells. As shown in Figure 5A, treatment with AG490 for 4 h concentration-dependently inhibited the phosphorylation of STAT-3 and STAT-5 without affecting their total expression levels in HuCCT-1 cells, confirming the drug's efficacy. Notably, AG490 treatment for 24 h further resulted in a significant, dose-dependent reduction of HuCCT-1 cell survival (Figure 5B). We next performed a siRNA transfection experiment to directly assess the role of STAT-3 or STAT-5 in HuCCT-1 cell survival. Results of gene silencing demonstrated that, compared with control siRNA-transfected HuCCT-1 cells, there were much lower expression levels of STAT-3 or STAT-5 in STAT-3 or STAT-5 siRNA-transfected cells (Figure 5C,E), and knockdown of STAT-3 or STAT-5 caused a significant reduction of HuCCT-1 cell survival (Figure 5D,F). We next examined whether CX-4945 affects the expression of the Bcl-2 family of anti-apoptotic proteins [17] in HuCCT-1 cells. Interestingly, as shown in Figure 6A, data from time course experiments showed high protein expression levels of Mcl-1 at 2 h but a sharp decline of the protein levels thereafter in HuCCT-1 cells. Of note, treatment with CX-4945 led to a substantial reduction of the protein expression levels of Mcl-1 in HuCCT-1 cells at the times tested. Control actin protein levels remained constant under these experimental conditions. We next sought to explore whether the Mcl-1 protein down-regulation by CX-4945 was due to reduced Mcl-1 transcripts in HuCCT-1 cells. Distinctly, CX-4945 treatment did not affect Mcl-1 mRNA expression in HuCCT-1 cells at the times tested (Figure 6B). Results of Western blot and RT-PCR analysis from triplicate experiments, as shown in Figure 6C,D, further revealed the ability of CX-4945 to significantly inhibit the protein expression of Mcl-1 with no change in Mcl-1 mRNA expression in HuCCT-1 cells. Densitometric data of Figure 6C for the protein expression levels and of Figure 6D for the mRNA expression levels of Mcl-1, normalized to those of control actin, are shown in Figure 6E,F. We then examined whether CX-4945 affects the expression and phosphorylation of eIF-2α, a translation-related protein [18], in HuCCT-1 cells. As shown in Figure 6G, there was a time-dependent increase in levels of phosphorylated eIF-2α protein in HuCCT-1 cells. Maximal phosphorylation of eIF-2α was seen at 24 h. However, of interest, treatment with CX-4945 further augmented levels of phosphorylated eIF-2α without influencing the protein expression levels in HuCCT-1 cells at the times tested. Results of Western blot analysis from triplicate experiments, as shown in Figure 6H, further demonstrated the ability of CX-4945 to significantly elevate the phosphorylation of eIF-2α without altering the protein expression levels in these cells. Densitometric data of Figure 6H for the phosphorylation levels of eIF-2α normalized to the protein's total expression levels are shown in Figure 6I. HIF-1α is an angiogenic transcription factor [19], and its overexpression is partially linked to CCA survival and metastasis [20]. This led us to test whether HIF-1α is expressed in HuCCT-1 cells and whether CX-4945 regulates it.
As shown in Figure 8A, results of the time course experiments illustrated sustained protein expression levels of HIF-1α in HuCCT-1 cells at the times tested. Similarly, there were also sustained protein expression levels of HIF-1β in these cells over the times applied. Strikingly, treatment with CX-4945 at 2 h caused a complete loss of HIF-1α protein with no change of HIF-1β protein in HuCCT-1 cells, whereas CX-4945 treatment at 4, 8, and 24 h resulted in a time-dependent reduction of both HIF-1α and HIF-1β proteins. Control actin protein levels remained constant under these experimental conditions (Figure 8A). Strikingly, as shown in Figure 8B, data of RT-PCR analysis showed that treatment with CX-4945 at 2 h did not affect the mRNA expression levels of HIF-1α and HIF-1β in HuCCT-1 cells, whereas CX-4945 treatment at 4 h and thereafter led to a time-dependent down-regulation of HIF-1α and HIF-1β transcripts in these cells. Using HIF-1α siRNA transfection, we next sought to explore the role of HIF-1α down-regulation in CX-4945's anti-survival effect on HuCCT-1 cells. As shown in Figure 8C, there was a complete loss of HIF-1α in HIF-1α siRNA-transfected HuCCT-1 cells compared with control siRNA-transfected ones. Control actin protein levels remained unchanged under these experimental conditions. Of importance, data of cell count analysis demonstrated that knockdown of HIF-1α led to a significant reduction of HuCCT-1 cell survival (Figure 8D).

Discussion
CK2 is a constitutively expressed and active protein kinase with a long history as a pro-survival and anti-apoptotic kinase. Given the widespread overexpression of CK2 in multiple cancers, a selective inhibitor of CK2 is an attractive targeted approach to treating cancer. Although several inhibitors of CK2 have been discovered in the last 20 years, only CX-4945 has entered clinical trials as a potential anti-cancer drug. Notably, preclinical in vitro and in vivo evidence of an anti-cancer effect of CX-4945, alone or in combination with gemcitabine and/or cisplatin, in CCA has been reported [12,21]. However, to date, the anti-cancer effect and mode of action of CX-4945 in CCA are not fully understood. Here we show that CX-4945 has anti-survival, pro-apoptotic, and anti-angiogenic effects, and that these effects are mediated through control of multiple targets, including CK2, caspase-9/3, DR-4, STAT-3/5, eIF-2α, Mcl-1, and HIF-1α. It is known that CX-4945 has anti-tumor activity against many human cancers, such as gastric [22], breast [23], pancreatic [24], and hematologic [25] malignancies. Of interest, it has recently been shown that CX-4945 has an anti-proliferative effect on HuCCT-1 cells and an apoptosis-inducing effect in a mouse xenograft model via regulation of CK2, caspase-3/7, PKB, and DNA-repair enzymes [12]. We also have demonstrated that CX-4945 (10 or 20 µM) strongly inhibits the growth of HuCCT-1 cells but is not cytotoxic to normal cells, demonstrating the drug's selectivity toward CCA cells. Cancer cells undergoing apoptosis have several distinct biochemical characteristics, including nuclear DNA fragmentation and cleavage of PARP [26]. Thus, given the present findings that CX-4945 induces nuclear DNA fragmentation and PARP cleavage in HuCCT-1 cells, it is evident that this CK2 inhibitor also induces apoptosis of HuCCT-1 cells.
Reportedly, apoptosis induction is mainly initiated from different entry points, for example, at the plasma membrane upon ligation of DRs (extrinsic pathway) or at the mitochondria (intrinsic pathway) [26]. Given that CX-4945 increases expression levels of cleaved caspase-9 while decreasing those of procaspase-3 and concomitantly up-regulates DR-4 in HuCCT-1 cells, it is likely that the CX-4945-induced apoptosis of HuCCT-1 cells depends on both the caspase-dependent intrinsic pathway and the DR-mediated extrinsic pathway. CK2 is a tetrameric enzyme composed of two catalytic (α and/or α') subunits and two regulatory (β) subunits [27]. CK2 is reported to phosphorylate hundreds of protein substrates in cells and to play a crucial role in the proliferation, survival, and malignant phenotype of cancer cells, including CCA [10,28]. Although CK2β expression in CCA cells and its survival role have been previously reported [29], little is known about the expression, phosphorylation, and function of CK2α in CCA cells. In the current study, we have demonstrated that CK2α is expressed and phosphorylated, and has kinase activity, as evidenced by elevated levels of phosphorylated CK2 substrates in HuCCT-1 cells. Moreover, CX-4945 does not alter the expression and phosphorylation of CK2α, yet it vastly lowers levels of phosphorylated CK2 substrates in HuCCT-1 cells, pointing out the drug's ability to selectively inhibit the kinase activity of CK2. Importantly, results of gene silencing herein have revealed that knockdown of CK2α, which substantially decreases levels of phosphorylated CK2 substrates in HuCCT-1 cells, further leads to a significant reduction of cell survival. These results strongly suggest that CK2α is a survival factor in HuCCT-1 cells and that CX-4945's anti-survival effect on HuCCT-1 cells is mediated through inhibition of CK2α and its downstream signaling pathway(s). The transcription factor STAT-3 is overexpressed and plays oncogenic roles in many cancers [30]. It is further noted that STAT-3 contributes to CCA carcinogenesis and progression and may serve as a marker for a poor prognosis of CCA [31]. This study shows that the expression and phosphorylation of STAT-3 and STAT-5, another member of the STATs family, are detected in HuCCT-1 cells. Until now, the regulation of STAT-3 and STAT-5 by CX-4945 in HuCCT-1 cells and the role of STAT-3 and STAT-5 have not been fully defined. Distinctly, we have demonstrated that CX-4945 inhibits both the phosphorylation and expression of STAT-3, but it blocks only the phosphorylation of STAT-5 in HuCCT-1 cells. These results indicate that CX-4945 inhibits both STAT-3 and STAT-5 in HuCCT-1 cells, and that the drug's inhibitory effect on STAT-3 and STAT-5 is due to transcriptional repression of the former and dephosphorylation of the latter at the protein (post-translational) level, respectively. Further, given the present results that pharmacological inhibition or respective knockdown of STAT-3 and STAT-5 in HuCCT-1 cells leads to a significant reduction of HuCCT-1 cell growth, it is likely that inhibition of STAT-3 and STAT-5 by CX-4945 contributes to the drug's anti-survival effect on these cells. Previously, CK2 regulation of STAT-3 in hematological malignancies has been reported [32,33]. However, data of gene silencing of CK2α herein illustrate that knockdown of CK2α does not influence the phosphorylation and expression of STAT-3 and STAT-5 in HuCCT-1 cells.
These results indicate that CK2α does not lie upstream of STAT-3 and STAT-5 in HuCCT-1 cells, and that the CX-4945-induced inhibition of STAT-3 and STAT-5 in these cells is mediated not through inhibition of CK2α but through regulation of another factor(s) or pathway(s). It will be interesting to examine, in the future, which kinase(s) or factor(s) regulate the phosphorylation and expression of STAT-3 and STAT-5 in HuCCT-1 cells in response to CX-4945 exposure by using kinomics or RNA sequencing approaches. Mcl-1 is an anti-apoptotic protein of the Bcl-2 family [34]. Overexpression of Mcl-1 is frequently detected in many tumors and is closely associated with tumorigenesis, poor prognosis, and drug resistance [35]. In this study, Mcl-1 is expressed in HuCCT-1 cells, and CX-4945 vastly down-regulates Mcl-1 at the protein, but not mRNA, level. These results point out that the CX-4945-induced Mcl-1 down-regulation in HuCCT-1 cells is due to decreased protein synthesis or increased protein turnover. It is thus likely that the loss of Mcl-1 may further contribute to the drug's anti-survival and/or pro-apoptotic effects. In this study, we also hypothesized that CX-4945 might exert its anti-survival and pro-apoptotic effects through the regulation of additional pathways (components). A wealth of information indicates that many anti-cancer drugs or agents induce ER stress and/or inhibit protein synthesis (translation) in cancer cells, which is crucial for their anti-survival and/or pro-apoptotic effects. eIF-2α, glucose-regulated protein 78 (GRP78), and activating transcription factor 4 (ATF4) are known ER stress and translation-related markers. We thus investigated whether CX-4945 alters the expression and phosphorylation levels of eIF-2α, GRP78, and ATF4 in HuCCT-1 cells. Distinctly, the present study has revealed that CX-4945 elevates eIF-2α phosphorylation but has no effect on the expression levels of GRP78 and ATF4 (Supplementary Figure S2) in HuCCT-1 cells. eIF-2α is a protein that controls translation, and thus its expression and (de)phosphorylation status greatly influence cancer cell growth and survival [36]. It is documented that non-phosphorylated eIF-2α is active in translation initiation complex formation, whereas the phosphorylated form is inactive. Accordingly, eIF-2α hyperphosphorylation in response to environmental stress or drug exposure reduces global translation [37]. Of note, previous studies have demonstrated CX-4945-induced eIF-2α hyperphosphorylation in eye cells [38] and leukemia cells [39]. In agreement with this, we have shown the capability of CX-4945 to elevate the phosphorylation of eIF-2α in HuCCT-1 cells. These results may thus imply that CX-4945 elicits eIF-2α hyperphosphorylation-dependent translational inhibition in HuCCT-1 cells, which may facilitate the drug's cytotoxic effects. Given that knockdown of CK2α also leads to increased eIF-2α phosphorylation in HuCCT-1 cells in this study, it can further be speculated that CK2α is a protein kinase that phosphorylates eIF-2α in these cells in response to CX-4945 exposure. HIF-1 is a heterodimer composed of HIF-1α and HIF-1β subunits, and because of its crucial role in tumor angiogenesis, it has been recognized as an important cancer drug target [40]. Indeed, numerous studies demonstrate a strong correlation between HIF-1 overexpression and tumor metastasis, angiogenesis, and resistance to cancer therapy [41]. Of interest, there is evidence that HIF-1α is expressed and plays a vital role in CCA [42].
At present, the expression and regulation of HIF-1α in CCA remain unclear. In addition, the regulation of HIF-1α by CX-4945 in CCA is not fully understood. In this study, we have observed that HuCCT-1 cells express both HIF-1α and HIF-1β. Strikingly, while the short-term (2 h) treatment with CX-4945 only down-regulates HIF-1α at the protein, but not mRNA, level, the long-term (4 to 24 h) treatment with this CK2 inhibitor results in a marked down-regulation of both HIF-1α and HIF-1β at their protein and mRNA levels. These results point out that CX-4945 time-differentially down-regulates HIF-1α in HuCCT-1 cells and that short-term administration of this drug may be effective in achieving a significant loss of HIF-1α at the protein level. Given that HIF-1α is a tumor angiogenic factor, the present study suggests that CX-4945 may have a potential anti-angiogenic effect in HuCCT-1 cells. However, the only evidence of a possible anti-angiogenic effect of CX-4945 found in this study is the down-regulation of HIF-1α. Thus, to substantiate the anti-angiogenic effect of CX-4945, it will be necessary in the future to examine whether CX-4945 can inhibit tube formation in phorbol-12-myristate-13-acetate-treated human umbilical vein endothelial cells, a well-established functional assay for a drug's anti-angiogenic effect in culture. Further, considering that knockdown of HIF-1α leads to a significant reduction of HuCCT-1 cell survival herein, it is likely that HIF-1α may act as a survival factor in HuCCT-1 cells. Thus, it appears that the loss of HIF-1α may further contribute to CX-4945's anti-survival effect on HuCCT-1 cells. It is proposed that a likely scenario of the possible molecular and cellular mechanisms underlying CX-4945's anti-survival and pro-apoptotic effects on HuCCT-1 cells herein is that (1)

In summary, we demonstrate that CX-4945 has anti-growth, anti-angiogenic, and pro-apoptotic effects on HuCCT-1 cells, mediated by controlling the expression, activation, and phosphorylation of multiple targets, including CK2, caspase-9/3, DR-4, STAT-3/5, eIF-2α, Mcl-1, and HIF-1α. This work suggests that CX-4945 could be a promising drug for treating human CCA.

Table S1. WesternBright™ enhanced chemiluminescence reagent (ECL, cat. no. K-12045-D20) was purchased from Advansta Corporation (San Jose, CA, USA). All the plasticware used for cell culture was obtained from SPL Life Sciences (Gyeonggi-do, Korea).

Cell Count Analysis
HuCCT-1 cells were seeded at a density of 0.3 × 10⁵ cells/500 µL/well in a 24-well plate overnight. Cells were treated with vehicle control (0.1% DMSO), CX-4945, or AG490 at the indicated concentrations and times. At each time point, the number of surviving cells, based on the principle that live cells have an intact cell membrane and cannot be stained with trypan blue dye (0.4%, cat. no. 15250-061, Gibco, Grand Island, NY, USA), was counted under a phase-contrast microscope. Approximately 100 cells were counted for each evaluation, and cell survival is expressed as a percentage of control. Phase-contrast images of the conditioned cells were also taken with a compound microscope (Nikon Eclipse TS200, Nikon Corp., Tokyo, Japan).
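As a small illustration of the survival readout just described (trypan-blue-excluding cells expressed as a percentage of the vehicle control), a minimal sketch follows; the counts are hypothetical placeholders, not data from the study.

```python
# Hypothetical counts of trypan-blue-excluding (live) cells; roughly 100
# cells were scored per evaluation, as described above.
live_control = [102, 98, 105]   # vehicle (0.1% DMSO) wells, triplicate
live_treated = [55, 61, 58]     # CX-4945-treated wells, triplicate

mean_control = sum(live_control) / len(live_control)
survival_pct = [100.0 * n / mean_control for n in live_treated]

print("cell survival (% of control):", [round(p, 1) for p in survival_pct])
```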
Colony Formation Assay
HuCCT-1 cells were seeded at a density of 200 cells/0.5 mL/well in a 24-well plate the day before treatment. Cells were treated with vehicle control (DMSO) or CX-4945 (20 µM) for two weeks. Colonies were fixed with 100% methanol and stained with 0.5% crystal violet [43].

Measurement of Genomic DNA Fragmentation
Measurement of genomic DNA fragmentation was conducted as previously described [44]. Briefly, HuCCT-1 cells were seeded at a density of 2.1 × 10⁶ cells/7 mL/100 mm plate the day before treatment. Cells were treated with vehicle control (DMSO) or CX-4945 at 5, 10, and 20 µM for 24 h. The conditioned cells were harvested, washed, and lysed in a buffer [50 mM Tris (pH 8.0), 0.5% sarkosyl, 0.5 mg/mL proteinase K, and 1 mM EDTA] at 55 °C for 3 h. Subsequently, RNase A was added to the cell lysate at 0.5 µg/mL, and the lysate was further incubated at 55 °C for 18 h. The cell lysate was then centrifuged at 10,000× g at 4 °C for 20 min. Genomic DNA was obtained from the cell lysate by extraction with an equal volume of neutral phenol-chloroform-isoamyl alcohol mixture (25:24:1) and analyzed via electrophoresis at 100 V on a 1.8% agarose gel for 20 min. The DNA was visualized and photographed under UV illumination after staining with ethidium bromide (0.1%, Sigma-Aldrich; Merck; St. Louis, MO, USA) using a gel documentation system (Gel Doc-XR, Bio-Rad Laboratories, Inc., Hercules, CA, USA).

Immunoblot Analysis
An equal amount of protein (40 µg) was separated via SDS-PAGE and transferred onto a polyvinylidene fluoride (PVDF) membrane (Millipore; Billerica, MA, USA) by electroblotting. The membranes were washed with Tris-buffered saline (TBS) (10 mM Tris, 150 mM NaCl, pH 7.5) supplemented with 0.05% (vol/vol) Tween 20 (TBS-T), followed by blocking with TBS-T containing 5% (wt/vol) non-fat dried milk. The membranes were probed overnight with primary antibodies at 4 °C, followed by incubation with secondary antibodies conjugated to horseradish peroxidase at room temperature for 2 h. The membranes were washed, and immunoreactivities were detected using enhanced chemiluminescence reagents (cat. no. K-12045-D50; Advansta, Inc.). Equal protein loading was assessed by the expression levels of control β-actin protein.

Small Interfering RNA (siRNA) Transfection
HuCCT-1 cells were seeded at a density of 1 × 10⁵ cells/2 mL/well in a 6-well plate the day before transfection. Cells were transfected with 200 pmol of each siRNA (control, CK2α, STAT-3, STAT-5, or HIF-1α) using Lipofectamine® RNAiMAX Transfection Reagent (Invitrogen, Waltham, MA, USA) in MEM medium without FBS for 6 h. The transfected cells were then grown in fresh RPMI medium adjusted to a final concentration of 10% HI-FBS for an additional 36 h. At 48 h post-transfection, the numbers of surviving cells, which cannot be stained with trypan blue dye, were counted under the microscope. To measure the transfection efficiency of each siRNA, whole-cell lysates (WCL) from the cells transfected with the respective siRNA were prepared and analyzed by Western blotting.

Reverse Transcription-Polymerase Chain Reaction (RT-PCR)
After treatments, total cellular RNA was isolated using TRIzol reagent (Life Technologies, Carlsbad, CA, USA) according to the manufacturer's protocol. Complementary DNA was then prepared using M-MLV reverse transcriptase (Gibco-BRL) according to the manufacturer's protocol. Three micrograms of total RNA were reverse transcribed, and the cDNA prepared above was amplified by PCR with the primers listed in Table S2. Levels of actin mRNA expression were used as an internal control.
Statistical Analyses Cell count analysis was performed in triplicate. Data are expressed as mean ± standard error (SE) for at least three independent experiments. One-way ANOVA followed by Dunnett's post hoc test was performed using SPSS 11.5 software (SPSS, Inc., Chicago, IL, USA). p < 0.05 was considered statistically significant.
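As an illustration of the statistical comparison described above (one-way ANOVA followed by Dunnett's post hoc test against the vehicle control), the Python sketch below reproduces the same workflow on hypothetical percent-survival values; SciPy ≥ 1.11 is assumed for scipy.stats.dunnett, and SPSS, as used in the study, would give an equivalent result.

```python
from scipy import stats

# Hypothetical percent-survival values (three independent experiments per group).
control = [100.0, 97.5, 102.3]  # DMSO vehicle
cx_5um  = [80.1, 76.4, 82.9]
cx_10um = [57.2, 60.8, 55.4]
cx_20um = [31.0, 28.6, 33.9]

# Omnibus one-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(control, cx_5um, cx_10um, cx_20um)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's post hoc test: each treatment vs. the vehicle control (SciPy >= 1.11).
res = stats.dunnett(cx_5um, cx_10um, cx_20um, control=control)
for label, p in zip(["5 uM", "10 uM", "20 uM"], res.pvalue):
    flag = "significant" if p < 0.05 else "not significant"
    print(f"CX-4945 {label} vs control: p = {p:.4f} ({flag})")
```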
Perspectives in the use of tannins in animal production & health: a review Tannins are a group of polyphenolic compounds that are widely present in the plant kingdom and possess various biological activities, including antimicrobial, anti-parasitic, anti-viral, antioxidant, anti-inflammatory and immunomodulatory activities. Tannins have traditionally been regarded as "anti-nutritional factors" for monogastric animals and poultry, but recent research has revealed that some of them, when applied in an appropriate manner, improve the intestinal microbial ecosystem, enhance gut health and hence increase productive performance. Therefore, tannins are a major research subject in the development of natural alternatives to in-feed antibiotics. Strong protein affinity is the well-recognized property of plant tannins, which has successfully been applied in ruminant nutrition to decrease protein degradation in the rumen and thereby improve protein utilization and animal production efficiency. Incorporation of tannin-containing forage into ruminant diets to control pasture bloat, intestinal parasites and pathogenic bacterial load represents three further important applications of tannins in ruminant animals. In conclusion, the use of tannins in an appropriate manner may help to improve animal performance and health. Introduction Animal husbandry is one of the important activities of farmers in developing countries, including India, and antibiotics have been used for several decades to improve the performance of animals and poultry. However, it is widely believed that the use of antibiotics as growth promoters promotes the evolution and/or selection of antibiotic-resistant microorganisms in farm animals (Chattopadhyay, 2014). Extensive research has been conducted over the last couple of decades to search for natural alternatives to in-feed antibiotics, and plant compounds (or phytogenic compounds) have been identified as having great potential (Yang et al., 2015). Naturally occurring plant compounds including tannins, saponins and essential oils are extensively assessed as natural alternatives to in-feed antibiotics. Among them, tannins are the major research subject in developing natural alternatives to in-feed antibiotics (Redondo et al., 2014). Tannins are a group of polyphenolic compounds that are widely present in the plant kingdom and possess various biological activities including antimicrobial, anti-parasitic, anti-viral, antioxidant, anti-inflammatory and immunomodulatory activities. Strong protein affinity is the well-recognized property of plant tannins and has successfully been applied to ruminant nutrition to decrease protein degradation in the rumen, thereby improving protein utilization and animal production efficiency.
Incorporation of tannin-containing forage into ruminant diets to control pasture bloat, intestinal parasites and pathogenic bacterial load represents another set of important applications of tannins in ruminant animals (Huang et al., 2018). Tannins have traditionally been regarded as "anti-nutritional factors" for monogastric animals and poultry, but recent research has revealed that some of them, when applied in an appropriate manner, improve the intestinal microbial ecosystem, enhance gut health and hence increase productive performance. The applicability of plant tannins as an alternative to in-feed antibiotics depends on many factors that contribute to the great variability in their observed efficacies. Chemical structure of tannins Tannins are a naturally occurring, heterogeneous group of phenolic compounds with diverse structures that share the ability to bind and precipitate proteins. Tannins are primarily classified into 3 major groups: hydrolyzable tannins (HT), condensed tannins (CT) and phlorotannins (PT). The first 2 groups are found in terrestrial plants, while PT occur only in marine brown algae. Hydrolyzable tannins (HT) Hydrolyzable tannins are made up of a polyol core (commonly D-glucose), which is esterified with phenolic acids (mainly gallic or hexahydroxydiphenic acid). The molecular weight of HT ranges from 500 to 3,000 Da. They are susceptible to hydrolysis by acids, bases or esterases, and thus can be easily degraded and absorbed in the digestive tract and may cause toxic effects in herbivores. Condensed tannins (CT) Condensed tannins are oligomeric or polymeric flavonoids consisting of flavan-3-ol units that include catechin, epicatechin, gallocatechin and epigallocatechin. Compared to HT, CT have more complex structures and higher molecular weights, ranging from 1,000 to 20,000 Da. Unlike HT, only strong oxidative and acidic hydrolysis can depolymerize CT structures, which are also not susceptible to anaerobic enzyme degradation. Phlorotannins (PT) The PT, which are structurally less complex than terrestrial tannins (HT and CT), are formed as a result of the polymerization of phloroglucinol (1,3,5-trihydroxybenzene). Occurrence of tannins Tannins are widely distributed in the plant kingdom and are especially abundant in nutritionally important forages, shrubs, cereals and medicinal herbs. CT are the most common type of tannin in forage legumes, trees and shrubs, while HT are often present in the leaves of trees and browse shrubs in tropical areas. Generally, tannins are more abundant in vulnerable parts of the plants, e.g., new leaves and flowers. PT are concentrated in the physodes located in the cytoplasm of cells within the outer cortical layers of the thalli. Chemical structures and concentrations of tannins vary greatly among plant species, growth stages and growing conditions such as temperature, light intensity, nutrient stress and exposure to herbivores. Tannins: older concept In the past, some researchers described tannins as anti-nutritional factors because they interfere with the utilization of nutrients such as dietary protein, as well as with enzymes, and also interact with structural carbohydrate polymers such as cellulose and hemicellulose. They chelate some minerals such as iron, forming insoluble ferrous-tannate complexes and decreasing iron availability. They also hamper fibre digestion through cellulase inactivation and reduce feed intake owing to their astringent effect (Mueller-Harvey, 2001).
Tannins: new perspectives Tannins in high concentrations reduce intake, the digestibility of protein and carbohydrates, and animal performance. Tannins in low to moderate concentrations prevent bloat and increase the flow of non-ammonia nitrogen and essential amino acids from the rumen (McNabb et al., 1993). Condensed tannins are expected to bind proteins with a high affinity, providing protection from degradation by rumen microbes. Forage containing CT has been reported to minimize the detrimental effects associated with a heavy load of internal parasites (Athanasiadou et al., 2001), to prevent the production of free radicals and to support their scavenging (Cerda et al., 2005). Tannins possess various biological activities including antimicrobial, anti-parasitic, anti-viral, antioxidant, anti-inflammatory and immunomodulatory activities (Huang et al., 2018). Biological activity of tannins Tannins are plant secondary metabolites that serve as part of the plant chemical defense system against invasion by pathogens and attack by insects. Tannins have shown numerous biological activities, and those most relevant to modern food animal production are as follows: Antimicrobial property The antimicrobial activities of tannins have long been recognized, and the toxicity of tannins to bacteria, fungi and yeasts has been reviewed. The mechanisms proposed to explain tannin antimicrobial activity include inhibition of extracellular microbial enzymes, deprivation of the substrates required for microbial growth, direct action on microbial metabolism through inhibition of oxidative phosphorylation, metal ion deprivation, and formation of complexes with the cell membrane of bacteria, causing morphological changes of the cell wall and increasing membrane permeability. Evidence has shown that the microbial cell membrane is the primary site of inhibitory action by tannins, through cell aggregation and disruption of cell membranes and their functions. Although protein precipitation is a universal property of all tannins, the antimicrobial activity of tannins is microbial species-specific and is closely related to the chemical composition and structure of the tannins. Generally, the antimicrobial activity of tannins against Gram-positive bacteria has been reported to be greater than that against Gram-negative bacteria (Smith and Mackie, 2004), because Gram-negative bacteria possess an outer membrane consisting of a lipid bilayer composed of an outer layer of lipopolysaccharide and proteins and an inner layer of phospholipids. However, tannins, especially CT isolated from several plants, have been shown to possess strong activity against Gram-negative bacteria. Phlorotannins also have greater antimicrobial activity than CT and HT. The antimicrobial property of tannins depends on the number of hydroxyl groups and the liberation of hydrogen peroxide upon oxidation of tannins. Pathogenic bacteria such as Escherichia coli O157:H7, Salmonella, Shigella, Staphylococcus, Pseudomonas and Helicobacter pylori were all sensitive to tannins. Because of the vast sources of tannins, which results in great diversity in their antimicrobial activities, the screening and identification of tannins that are effective and specific to target microbes will continue to be a research endeavor. Anti-parasitic property Anti-parasitic properties of tannins have been demonstrated by both in vitro and in vivo studies.
The anthelmintic mechanisms of plant tannins have been suggested to operate through "direct" action of tannins on parasite cells by 1) reducing establishment of the infective third-stage larvae in the host, thereby reducing host invasion, 2) reducing excretion of nematode eggs by the adult worms and 3) reducing development of eggs to third-stage larvae, and through "indirect" action by improving the host's resistance to nematodes. Condensed tannins extracted from tanniniferous forage legumes such as sainfoin (Onobrychis viciifolia), big trefoil (Lotus pedunculatus), birdsfoot trefoil (Lotus corniculatus) and sulla (Hedysarum coronarium) reduced the proportion of hatched Trichostrongylus colubriformis eggs and inhibited egg development of lungworm and gastrointestinal nematodes (mixed species of Ostertagia, Oesophagostomum, Cooperia, Trichostrongylus, and Strongyloides) in a dose-dependent manner (Molan et al., 2002). However, similar to their antimicrobial activities, the anthelmintic effects of tannins vary greatly depending on the chemical composition and structure of the tannins, the parasite species or growth stage, and the host species. Antioxidant property Naturally occurring phenolic compounds have long been recognized as effective antioxidants. The antioxidant property of tannins has wide application in the food industry and the medical field to prevent oxidative stress-related diseases such as cardiovascular disease, cancer or osteoporosis. It has been shown that CT and HT of relatively high molecular weight exhibit greater antioxidant activities than simple phenolics (Hagerman et al., 1998). The number of hydroxyl groups and the degree of polymerization of tannins are considered to be correlated with their ability to scavenge free radicals (Ariga and Hamano, 1990). Tannins with the most hydroxyl groups are most easily oxidized and therefore possess the greatest antioxidant activity. It has been speculated that dietary tannins may spare other nutritive antioxidants during the digestive process or may protect proteins, carbohydrates and lipids in the digestive tract from oxidative damage during digestion (Marshall and Roberts, 1990). The potential of tannins as biological antioxidants has been indicated in many in vitro and in vivo studies. However, the antioxidant mechanism of tannins in animal tissues is unknown. Further research in this area is needed, especially because enhancing antioxidant status is suggested to be one of the main benefits of feeding tannins for animal wellbeing and performance. Anti-inflammatory property Phenolic compounds such as flavonoids, condensed tannins and gallotannins have anti-inflammatory properties. These compounds are known to inhibit some molecular targets of pro-inflammatory mediators in inflammatory responses. CT are antagonists of particular hormone receptors or inhibitors of particular enzymes such as the COX enzymes. For example, proanthocyanidins from grape seed and leucoanthocyanidins from the hot-water bark extract of the black spruce showed strong anti-inflammatory activity. Gallotannins exert various biological effects ranging from anti-inflammatory to anticancer and antiviral properties. The mechanisms underlying the anti-inflammatory effect of tannins include the scavenging of radicals and inhibition of the expression of inflammatory mediators, such as some cytokines, inducible nitric-oxide synthase (iNOS) and COX-2 (Mohammed et al., 2014).
It needs to be pointed out that most of the studies in this area were conducted using in vitro models. The efficacy of the anti-inflammatory action of tannins in the animal body after digestion needs to be evaluated further in in vivo models. Anti-virus property Tannins show antiviral activity by affecting different stages of viral replication, including the extracellular virions themselves, their attachment to the cell, their penetration into the cell and the replication process in the host cell, as well as the assembly of new viral particles, transport proteins, polysaccharides and viral enzymes. In almost all of the above-mentioned stages, the tannin activity is due to their ability to bind permanently to the proteins of the capsid or supercapsid, to specific viral enzymes required for viral replication, or to newly synthesized viral proteins involved in the composition of the new viral particles. Tannins have been shown to have significant activity against several viruses, e.g., human immunodeficiency virus (HIV), bovine adeno-associated virus and noroviruses. Hydrolyzable tannins showed anti-enterovirus 71 activity in vitro and efficiently reduced mortality and clinical signs through the inhibition of viral replication in a mouse model. Phlorotannins isolated from E. cava have been demonstrated to possess strong activity against influenza virus neuraminidase, against porcine epidemic diarrhea virus (PEDV) by inhibiting viral entry and replication, and against HIV-1. Many studies have been conducted on the effects of tannins against the replication of HIV, and the results of the various teams indicate that tannins have several targets of action in the HIV replicative cycle. Ellagitannins isolated from Tuberaria lignosa inhibited HIV entry into MT-2 cells, and other work on ellagitannins (geraniin and corilagin) found reduced HIV replication through inhibition of the HIV-1 protease and HIV-1 integrase enzymes. All of the above information demonstrates that tannins possess varying antiviral activities depending on their chemical compositions and structures. In vivo studies are needed to explore the potential of tannins as natural antiviral agents for use in the animal and poultry industries. Use of tannins in ruminants Tannins, especially CT, are widely distributed in nutritionally important forages, trees, shrubs and legumes, which are commonly consumed by ruminants. Therefore, the effects of CT on ruminant nutrition, health and production have been extensively studied and reviewed. Ruminal fermentation Condensed tannins can have beneficial or detrimental effects on ruminants, depending on the amount consumed by animals, their type and chemical structure, as well as the composition of the rest of the diet, especially its CP concentration (Mueller-Harvey, 2006). It is generally believed that CT in forage at low to medium concentrations (<50 g/kg DM) benefit ruminants in terms of improving protein utilization without negatively affecting feed intake and nutrient digestion, depending on the CT source and the analytical method/standard used to determine concentration. Growth Various researchers have shown that tannin supplementation at low to moderate concentrations (10-40 g/kg DM) improves animal growth by improving the digestive utilization of feed by ruminants, mainly because of a reduction in ruminal protein degradation and, as a consequence, a greater availability of (mainly essential) amino acids for absorption in the small intestine. Dey et al. (2008) studied the effect of graded levels of CT on growth rate in lambs. Lambs fed CT-1.5% recorded a significantly higher (p<0.05) average final body weight (kg) compared with those given the CT-0 and CT-2.0% supplements, while the body weight of animals under CT-1.0% was intermediate. The positive response of ADG to the 1.5% level of CT in the supplement indicates that the binding effect of tannins was pronounced only at this level, supplying protein to the lower gut for subsequent, more efficient use for tissue growth. In a meta-analysis, it was demonstrated that tannin supplementation does not affect weight gain, feed intake, or feed efficiency in beef cattle. Milk production Tannin supplementation may result in increased milk production and increased protein and lactose production.
This increased concentration of protein is due to the greater availability of intestinal amino acids, especially methionine and lysine, which are thought to limit milk production. Tannins mainly exert this effect on proteins, but they also affect other feed components to different degrees. Their main effect on proteins is based on their ability to form hydrogen bonds that are stable between pH 3.5 and 8.0 (approximately). These complexes are stable at rumen pH and dissociate when the pH falls below 3.5 (such as in the abomasum, pH 2.5-3) or rises above 8 (for example in the duodenum, pH 8), which explains much about the activity of tannins in the digestive tract (Hagerman et al., 1992). Evidently, the modifications of digestibility caused by tannin ingestion are mainly associated with changes in the ruminal fermentation pattern, along with changes in intestinal digestibility. The greater concentration of lactose can be explained by a greater glucose supply; most lactose synthesis in the mammary gland relies directly on blood glucose, and in ruminants gluconeogenesis mainly involves propionic acid and amino acids. Thus, a greater availability of amino acids would contribute to greater synthesis of glucose, resulting in increased milk production. Dey et al. (2014) conducted an experiment on the effect of condensed tannin supplementation through Ficus bengalensis leaves on milk production in crossbred cows. They concluded that daily milk yield was significantly higher (p<0.05) in the group that received the supplemented diet. The 4% fat-corrected milk yield was also significantly (p<0.01) higher in the group that received the FBLM diet. Likewise, Menci et al. (2021) stated that dietary supplementation with tannin extract at a dose of 150 g/day in dairy cows showed no effect on milk quality, whereas in the dry season the milk from cows eating tannins showed a lower BCFA concentration, C18:1 t10 to C18:1 t11 ratio, and rumenic to linoleic acid ratio. Wool production Clean wool is mainly protein with a high cystine content, and the availability of sulphur-containing amino acids (SAA) significantly affects wool production (Reis, 1963). Similarly, Dey et al. (2008) studied the effect of graded levels of condensed tannin from Ficus infectoria on wool yield in lambs and found that total wool yield (g) and yield per day (g) were significantly higher (both linear and quadratic, P < 0.01) for the CT-1.5% treatment compared with the similar wool yields of lambs in the CT-0 and CT-2.0% treatments. Role of tannin in bloat prevention Probably the most successful application of tannins in ruminant production is to reduce frothy bloat. Bloat, or tympany, is a common digestive disorder in ruminants caused by the formation of a stable protein foam in the rumen of animals fed high-nutritive-value legumes such as white clover or lucerne. The condition is characterized by an accumulation of gas in the rumen and reticulum that can impair both digestive and respiratory function. Tannins, by precipitating protein during chewing and rumination, reduce protein solubility in the rumen and thereby decrease the occurrence of bloat. Therefore, a moderate concentration of tannins in the feed of animals destabilizes the protein foam, which renders these forages bloat-safe. Li et al. (1996) estimated that as little as 1.0 mg CT/g DM is needed to prevent pasture bloat. Incorporation of CT-containing forage such as sainfoin into alfalfa has been proved an effective method of controlling alfalfa pasture bloat. Similarly, Min et al.
(2012) conducted an experiment on the effect of plant tannin supplementation on bloat frequency. Twenty-six heifers were allocated to 3 treatments that included a control (non-tannin group) and 2 types of tannins (mimosa and chestnut tannins). Plant tannins (1.5% of DMI) were supplemented once daily, mixed with a textured feed (500 g/animal). They found that daily supplementation of mimosa and chestnut tannins to heifers grazing wheat forage minimized bloat frequency. Tannins as anthelmintics Another major application of tannins in ruminants, especially grazing ruminants, is to control digestive parasites. In all grazing ruminants, gastrointestinal nematodes (GIN) are often implicated as a main cause of substantial production losses in extensive farming operations worldwide. Fecal excretion of nematode eggs into the environment during grazing or browsing is a major route for widespread contamination and feco-oral infestation of host animals. Repeated use of chemically produced anthelmintics, recommended and prescribed by veterinarians, represents an effective treatment/control program for GIN. However, the development of anthelmintic resistance in nematodes, together with the current trend for organic farming, has increased the demand for alternatives to chemoprophylaxis in order to reduce or exclude the use of anthelmintic drugs to control parasites. Hoste et al. (2012) reported that tannin-rich plants show an anthelmintic effect on various gastrointestinal nematodes by affecting different stages of the parasite life cycle. Tannins from mimosa (HT), chestnut (HT) and quebracho (CT) are effective against various intestinal parasites in ruminants. It seems that dietary tannin concentrations below 20 g/kg DM are ineffective in controlling ruminant intestinal parasites. Lopes et al. (2016) conducted an experiment on the effect of tanniniferous feed from Bauhinia pulchella on pasture contaminated with gastrointestinal nematodes from goats. Sixteen crossbred goats naturally infected with gastrointestinal nematodes were fed a tanniniferous concentrate from the leaves of B. pulchella and compared with control animals in a separate paddock without condensed tannin supplementation. They concluded that condensed tannin from B. pulchella showed anthelmintic activity, affected egg viability and reduced pasture contamination. Tannins as antioxidants Another application of tannins in ruminants is to reduce oxidative stress. Gulcin et al. (2009) reported that tannic acid is an effective natural antioxidant that can be used as a food preservative or nutraceutical. Chaurasiya et al. (2018) conducted an experiment on the effect of feeding tannin-rich oak (Quercus leucotrichophora) leaves on the antioxidant status of parasite-infected goats in the Kumaon hills. They concluded that antioxidant activity, as indicated by GSH, SOD and catalase, was significantly higher in oak-fed groups than in grass-fed groups. Hematological values of GSH, SOD and catalase are representative of the antioxidant status of the body (Han et al., 2004). Tannins as antimicrobial agents The gastrointestinal tract (GIT) of ruminants is the main reservoir of enterohemorrhagic Escherichia coli O157:H7, which is responsible for food-borne infections in humans that can lead to severe kidney disease. Recent research has shown that incorporation of tannins or tannin-containing forage into diets reduces food-borne pathogens in the ruminant digestive tract.
Supplementation of chestnut tannin at a concentration of 15 g/kg DM decreased fecal shedding of E. coli in cattle fed hay diets (Min et al., 2007). Huang et al. (2015) also found that lambs challenged with E. coli O157:H7 and fed diets containing 36 g of purple prairie clover CT/kg DM shed significantly less E. coli O157:H7 than lambs fed diets without CT. Use of tannins in monogastric animals Unlike in ruminants, tannins have traditionally been considered 'anti-nutritional' factors in monogastric nutrition, with negative effects on feed intake, nutrient digestibility and production performance. Therefore, it is almost common practice in the feed industry to minimize the use of tannin-containing feeds in swine and poultry diets, or to take measures to reduce their dietary concentrations if such feeds are used. However, several recent reports have shown that low concentrations of several tannin sources improve health status, nutrition and animal performance in monogastric farm animals (Brus et al., 2013). Compared with other domestic animals, pigs seem to be relatively resistant to tannins in the diet and are able to consume relatively high quantities of tannin-rich feedstuffs without presenting any toxic symptoms. This is likely due to parotid gland hypertrophy and the secretion in the saliva of proline-rich proteins that bind and neutralize the toxic effects of tannins. Brus et al. (2013) investigated the effect of chestnut wood tannins and organic acids on the growth performance and faecal microbiota of pigs from 23 to 127 days of age. The results indicated that supplementation with chestnut wood tannins and organic acids can improve growth performance in the period from 82 to 127 days, mainly by reducing harmful E. coli counts and by increasing counts of beneficial lactic acid bacteria. The mechanisms of the growth-promoting effects of tannins in monogastric animals are much less well understood than those in ruminants. The growth-promoting action of tannins in monogastric animals relies on the balance between their negative effects on feed palatability and nutrient digestion, through protein and enzyme complexation, and their positive effects on the health status of the intestinal ecosystem, through their antimicrobial, antioxidant and anti-inflammatory activities. Compared with the vast sources of tannins for ruminants, the sources of tannins used for monogastric animals are rather limited, and so far only a few have been studied and shown potential as feed additives. The final impact of tannins on animal performance depends on the type of animals and their physiological status, the feed, and the type of tannins and their concentrations in the diet. Wang et al. (2008) conducted a study on the effect of grape seed proanthocyanidin extract (GSPE) supplementation on the growth performance of broilers infected with Eimeria tenella and concluded that the lowest mortality and the greatest growth gains were recorded in the group of birds fed GSPE at 10 to 20 mg/kg. In a second experiment, they concluded that GSPE supplementation at 12 mg/kg of diet significantly reduced mortality and lesion scores and improved antioxidant status in birds infected with oocysts of E. tenella. Conclusion Apart from their nutritional attributes, the anthelmintic, anti-bloat, antioxidative, anti-inflammatory, antimicrobial and anti-methanogenic roles of tannins have been well documented in several studies.
Thus, the evidence on the use of tannins as feed additives indicates that they are a valuable alternative to complement or replace antibiotic growth promoters (AGPs) in industrial livestock production.
CaringGuidance™ after breast cancer diagnosis eHealth psychoeducational intervention to reduce early post-diagnosis distress Purpose Significant cancer-related distress affects 30–60% of women diagnosed with breast cancer. Fewer than 30% of distressed patients receive psychosocial care. Unaddressed distress is associated with poor treatment adherence, reduced quality of life, and increased healthcare costs. This study aimed to evaluate the preliminary efficacy of a new web-based, psychoeducational distress self-management program, CaringGuidance™ After Breast Cancer Diagnosis, on newly diagnosed women’s reported distress. Methods One-hundred women, in five states, diagnosed with breast cancer within the prior 3 months, were randomized to 12 weeks of independent use of CaringGuidance™ plus usual care or usual care alone. The primary multidimensional outcome, distress, was measured with the Distress Thermometer (DT), the Center for Epidemiologic Studies Depression Scale (CES-D), and the Impact of Events Scale (IES) at baseline and months 1, 2, and 3. Intervention usage was continually monitored by the data analytic system imbedded within CaringGuidance™. Results Although multilevel models showed no significant overall effects, post hoc analysis showed significant group differences in slopes occurring between study months 2 and 3 on distress (F(1,70) = 4.91, p = .03, η2 = .065) measured by the DT, and depressive symptoms (F(1, 76) = 4.25, p = .043, η2 = .053) favoring the intervention. Conclusions Results provide preliminary support for the potential efficacy of CaringGuidance™ plus usual care over usual care alone on distress in women newly diagnosed with breast cancer. This analysis supports and informs future study of this self-management program aimed at filling gaps in clinical distress management. Introduction Three and one-half million US women live with a history of breast cancer [1]. Approximately 30-60% of these women experience significant cancer-related distress [2,3]. Multidimensional cancer-related distress manifests along a continuum from normal fears to significant anxiety, depressive symptoms, and/or depression at clinical or subclinical levels [4][5][6]. Approximately 50% of women experience depression [5], depressive symptoms [6], and/or anxiety [4,5] in the acute post-diagnosis period or within the first year [5]. While depression lessens over time, the rate of depression for breast cancer survivors (BCS) remains over twice that of the general population even 5 years later [7]. The 2019 National Comprehensive Cancer Network (NCCN) Guideline for Distress Management endorses early assessment and treatment of cancer-related distress to improve treatment adherence, reduce visits and admissions, and to improve patients' psychological wellbeing [8]. Longitudinal studies in breast cancer support the NCCN's recommendations [9,10]. However, institutional capacity, access to psychological care, and patient acceptance pose barriers to distress management for 70-80% of distressed patients [11][12][13]. CaringGuidance™ program CaringGuidance™ After Breast Cancer Diagnosis is a new unguided, web-based, psychoeducational program developed to address the need for early and accessible self-management of cancer-related distress in newly diagnosed women to overcome institutional and patient barriers [14]. CaringGuidance™ (version 1) www. 
caringguidance.org described elsewhere [15,16] contains five modules (17 subtopics) of supportive oncology-based psychoeducation and cognitive-behavioral techniques (e.g., cognitive reframing and rehearsal, relaxation), coping skills, problem solving, communication strategies, and validation. Content is user-guided, and offers self-tailored flexibility to explore written text; 72 survivor video vignettes featuring six BCS age 30-70 years, White and Black race, with stage 0-III breast cancer; 20 thought-challenging and reflective journaling exercises; mindfulness meditation guidance; glossary; links to cancer-related resources; and discussion board. CaringGuidance™ was designed by a multidisciplinary team of psychology and oncology professionals as well as BCS [14] to provide a place for women to mentally process automatic thoughts and emotions associated with a new breast cancer diagnosis. Content was informed by prior qualitative research interviews with newly diagnosed women [17,18]. For example, the module What Does This Diagnosis Mean? is comprised of headings and associated content from the varying thoughts shared by women within days of their diagnosis such as, "I can't stop thinking about cancer," "I purposely try to never think about cancer," and "Ignoring thoughts of cancer helps me feel in control." Each heading is followed by evidence-based guidance provided in a neutral, accepting tone. Body image, receiving and accepting support, disclosure, understanding the complexity of meaning in a cancer diagnosis, managing socially constraining behaviors, and moving forward are examples of additional topics explored in the program [15,16]. Program content grounded in data provided by newly diagnosed women during our earlier qualitative work [17,18] is intended to support new users' ability to explore their thoughts and feelings, compare and contrast with what other women shared, and thus receive validation. The efficacy of Internet delivery of cognitive-behavioral techniques (CBT) is supported [19], as is CBT to reduce depression and stress in women with breast cancer [20]. Unguided Internet CBT psychosocial interventions also show promise [21][22][23]. To the best of our knowledge, CaringGuidance™ is one of three fully unguided, Internet psychoeducational interventions with content specific only to women with breast cancer. CaringGuidance™ is unique, however, in that it was specifically designed for the critical earliest post-diagnosis adjustment period consistent with the NCCN Guideline recommendation for early distress intervention [8], while the other interventions were designed for women months [24] to years' post-treatment [25]. Following development and focus group testing of CaringGuidance™ [14], our team conducted this first randomized trial of the intervention. Favorable results regarding feasibility, acceptance, and satisfaction with CaringGuidance™ by newly diagnosed women were reported earlier [16]. Women with program access also reported fewer perceived social constraints than women in the control group [15]. This is a report of findings regarding the preliminary efficacy of CaringGuidance™ on the primary outcome of distress from this first randomized trial of CaringGuidance™. The hypothesis was that women newly diagnosed with breast cancer who accessed CaringGuidance™ over 12 weeks in addition to usual care would report lower levels of distress than women who had access to usual care alone. 
Consistent with the goal of informing a future effectiveness/implementation trial, potential modifiers of the intervention effects were also explored. Subjects Study methods are described elsewhere [15,16] and summarized here. This trial, led by a single center in Western New York, recruited subjects through distribution of Institutional Review Board-approved (#00003128) flyers in 13 cancer, radiology, and internal medicine clinics in four states within the Eastern and Midwestern United States. Advertisements were run on radio, television, newspapers, and Facebook. Community breast cancer organizations (e.g., American Cancer Society) also distributed flyers. Eligible women were English-speaking, at least 21 years old, and experiencing their first diagnosis of stage 0-II breast cancer in the past 3 months. Access to email and Internet on a desktop or laptop computer was required since CaringGuidance™ was not mobile-capable at that time. Clinics were encouraged to distribute flyers to women as early as possible post-diagnosis. Procedure After screening by phone, eligible subjects provided written consent and were randomized to usual care plus CaringGuidance™ (intervention) or to usual care alone (control). Randomization was determined prior to study initiation using a random number generator to create an allocation sequence in blocks of four. Enrollment occurred from August 2013 to August 2015. Four measurement occasions were collected (baseline and months 1, 2, and 3). All monthly data were self-reported and returned by US mail, after which subjects received a $25 Amazon gift card [15,16]. Both groups No restrictions were imposed on usual care. Subjects tracked medical appointments, symptoms, and sources of support received to capture usual care during the 12 weeks. All subjects received scripted phone calls from one research assistant (RA) at 28 ± 5 working-day intervals to review log entries and assess for adverse events. All calls were digitally recorded, and a 10% sample was reviewed by the PI for script fidelity [15,16]. Intervention group Subjects were informed that the suggested dose of independent CaringGuidance™ use was 20-30 min, 2-3 times per week (i.e., 40-90 min/week for 12 weeks). This suggested dose was estimated with reference to the traditional 12 one-hour sessions of in-person therapy. A brief one-time orientation to the program's three introductory pages was provided verbally or by email. Subjects received a pictorial guide on general website use (e.g., increasing volume, font size). Program engagement was encouraged through automatically generated emails. To support intervention receipt, the RA asked scripted questions during the monthly phone call regarding subjects' perceived ease of program log-in and use. The RA provided a scripted verbal reminder regarding areas of CaringGuidance™ that a subject had not explored [15,16]. Measures Distress, the primary multidimensional [8] outcome, was measured in three ways. Distress Thermometer The Distress Thermometer (DT) is a single-item, 0-10 scale [26]. The DT is accurate in assessing distress when compared with the Hospital Anxiety and Depression Scale and the Brief Symptom Inventory-18, with a score of ≥ 4 of 10 associated with poorer performance status among ambulatory cancer patients, including women with breast cancer [27]. Center for Epidemiologic Studies Depressive Scale The 20-item Center for Epidemiologic Studies Depressive Scale (CES-D) [28] was used to measure depressive symptoms. Higher scores indicate more severe symptoms.
Scores ≥ 16 are clinically significant. Internal consistency is alpha = .90 in patient and alpha = .80 in community populations [28]. In the current study, alpha = .86. Impact of Event Scale Intrusive and avoidant thoughts anchored to the breast cancer diagnosis were measured with the 15-item, 4-point Impact of Event Scale (IES) [29]. A score ≥ 9 indicates an impactful event. Scores ≥ 26 represent strong impact demonstrated by intrusive/avoidant thinking. Cronbach's alpha for the entire scale equals .86 [29] and in this study, alpha = .87. Demographics and exploratory psychosocial variables Self-reported demographic variables were collected at baseline, including subject's history of computer use (8-item yes/no), prior breast cancer diagnosis of family/friend (yes/no), health literacy (a single item, "When you go to the doctor's office, how confident are you filling out medical forms by yourself" ("extremely" to "not at all") [30]), and a study-derived single-item (yes/no) question on stressful events in the past year. At baseline, and again monthly, study-derived questions were used to measure history of mental healthcare (3-item yes/no), with the remainder being single-item responses on perceived support in the past week (1 "not at all" to 10 "greatly"), level of personally modifiable causal attribution for cancer (0 "not at all" to 5 "extreme"), sense of control over cancer and treatment (0 "not at all" to 5 "extreme"), and self-perception of coping (1 "not well at all" to 10 "extremely well"). Dispositional optimism was measured at baseline because of its well-established association with psychological adjustment [31]. The Life Orientation Test-Revised (LOT-R) [32] was used, in which higher scores on this 10-item scale indicate greater optimism. The LOT-R has alpha = .78 [32], and alpha = .81 in this study. Coping was measured at baseline and monthly using the Brief COPE [33], a measure of 14 coping responses rated on a 1 ("I haven't been doing this at all") to 4 ("I've been doing this a lot") scale. The two-item Active Coping subscale (alpha = .68) [33] was examined for this study, in which alpha = .67. Intervention usage Minutes of use, number of sessions, mean log-in duration, and the type of program material accessed were captured by the CaringGuidance™ data analytics system [16]. Sample size Power analysis indicated an estimated sample size of 54 subjects (27/group) for repeated measures ANOVA with four time points, a small to medium effect size of .35, an average correlation coefficient of .5, and alpha .05 [34]. Effect size was estimated based on prior publications of unguided, web-based CBT interventions for cancer-related distress [25,35]. Projected attrition was 23% based on our prior work with newly diagnosed women undergoing cancer treatment during psychosocial studies [17,18]. Additionally, we planned to compare baseline mood differences among women completing baseline measures before versus after primary surgical treatment to inform our future work, and an interim analysis was planned to prepare a grant submission. Thus, the target enrollment was set at 100 subjects. Descriptive statistics were calculated on all study variables. Spearman's correlations were calculated between demographic and study variables. Due to non-normality in depressive symptoms and impact of events, these variables were transformed using a square root transformation prior to analysis. The primary analyses were performed using multilevel modeling (MLM) [36,37] (i.e., hierarchical linear modeling, or mixed effects models).
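To make this modeling approach concrete, the following minimal Python/statsmodels sketch illustrates a comparable multilevel model with random intercepts and slopes and a time-by-group interaction. It is only an approximate stand-in for the SAS PROC MIXED analysis described below (statsmodels does not provide Kenward-Roger degrees of freedom), and the simulated data and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject per measurement occasion
# (time 0-3 = baseline to month 3; group 0 = control, 1 = intervention).
rng = np.random.default_rng(0)
n_subjects, n_times = 40, 4
subject = np.repeat(np.arange(n_subjects), n_times)
time = np.tile(np.arange(n_times), n_subjects)
group = np.repeat(rng.integers(0, 2, n_subjects), n_times)
distress = 6 - 0.5 * time - 0.4 * time * group + rng.normal(0, 1, n_subjects * n_times)
df = pd.DataFrame({"subject": subject, "time": time, "group": group, "distress": distress})

# Random intercept and random slope for time within subject; the time:group
# interaction term is the direct test of the intervention effect.
model = smf.mixedlm("distress ~ time * group", data=df,
                    groups=df["subject"], re_formula="~time")
result = model.fit(reml=True)  # REML estimation, as in the study
print(result.summary())

# Exploratory moderation would add a baseline moderator and its interactions,
# e.g. "distress ~ time * group * active_coping" (one moderator per model).
```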
Parameter estimates were obtained using restricted maximum likelihood estimation and Kenward-Roger degrees of freedom for tests of significance [38]. MLM utilizes all available data through the use of maximum likelihood estimation (i.e., no listwise deletion), so all subjects with at least one measurement occasion are used in the analysis. Models included random intercepts and slopes across subjects, as well as an unstructured covariance matrix to estimate the covariance between intercepts and slopes. Direct tests of intervention effects were assessed by the time-by-group interaction. Separate models were performed for each outcome variable. The MIXED procedure in SAS version 9.4 was used for these analyses. For exploratory tests of moderation, baseline levels of demographic variables of interest, chosen based on evidence pertaining to breast cancer-related distress [39], were included to test whether the intervention was more or less effective for certain women. The moderators tested were age, income, prior mental health diagnosis, stressful life event in the past year, surgical status at baseline (pre/post), breast cancer stage, perceived support, causal attribution, optimism, coping (active and perceived), and baseline distress, depressive symptoms, and impact of the cancer event. Results Of 139 women screened, 100 were enrolled and randomly assigned to condition (43 control; 57 intervention). Nine control and eight intervention subjects withdrew or were lost to follow-up, resulting in 17% attrition (Fig. 1). Attrition did not bias treatment effects, as there were no significant differences between groups in dropout rate, number of time points completed, or the last time point completed [15]. Enrolled subjects resided in 5 states within the Eastern and Midwestern United States [16]. The last subject completed participation in November 2015 (Fig. 1). Table 1 provides descriptive demographic, treatment timing, and intervention usage data. Baseline characteristics The intervention and control groups did not differ on baseline demographic characteristics, with the exception that income was slightly higher in the intervention group (p = .042) (Table 1). Income was not correlated with other baseline variables (Table 2). The intervention and control groups did not differ on cancer stage (p = .93) or time since diagnosis (p = .89), with 89% of subjects being within two or fewer months of diagnosis at baseline (Table 1). A greater proportion of the control group completed baseline measures prior to receiving breast cancer surgery (76.7%, n = 33) than the intervention group (57.9%, n = 33) (p = .049) (Table 1); however, this did not bias treatment effects because the groups did not differ on baseline distress (i.e., DT, CES-D, and IES) and equal proportions of subjects in each condition also demonstrated clinically significant baseline distress (i.e., DT ≥ 4, CES-D ≥ 16, or IES ≥ 26) (Table 1). Table 2 presents the baseline Spearman correlations for demographic and psychosocial variables. Usual care (both groups) Groups did not differ during the study with respect to the months in which breast cancer surgery or chemotherapy was received. However, more intervention subjects received radiation during month 2 than control subjects (p = .03).
Both groups reported accessing clinical support services from healthcare providers (e.g., physicians, social work, psychology) equally in all study months except month 3, when intervention subjects reported accessing clinical support services on fewer days on average than control subjects (p = .023) (Table 1). Groups did not differ (p > .05) on the duration of the monthly RA phone call, thus minimizing potential bias from research staff interactions with subjects [16]. Intervention effects No significant overall time-by-group interactions were observed. Post hoc analysis showed significant differences in slopes between groups between study months 2 and 3 on depressive symptoms and distress (measured by the DT). In other words, from baseline to study month 2, both groups experienced a decline in distress, depressive symptoms, and intrusive/avoidant thoughts. However, from study month 2 to month 3, the intervention group continued to decline, whereas the control group experienced an increase on all three measures. The slope difference between groups during the month 2 to month 3 interval was significant for distress measured by the DT (F(1,70) = 4.91, p = .03, η² = .065) and for depressive symptoms (F(1,76) = 4.25, p = .043, η² = .053). Moderators of intervention effects Exploratory analysis identified four variables that appeared to moderate the effects of CaringGuidance™. Figures 3 and 4 present model-predicted scores based on group and varying levels of the moderators. The effect of CaringGuidance™ on distress (measured by the DT) varied depending on initial baseline DT score [t(77) = − 2.18, p = .032]. No difference was observed for subjects low on initial DT distress scores. For subjects with higher DT scores at baseline, however, distress reduction was greater for the intervention than the control group (Fig. 4). Active coping significantly moderated the effect of the intervention on depressive symptoms [t(84.2) = 2.12, p = .037], such that while there was no effect of the intervention on depressive symptoms for subjects high in active coping at baseline, for subjects low in active coping, greater reduction in depressive symptoms occurred over time (Fig. 3). The intervention effect on depressive symptoms was also significantly moderated by personal causal attribution beliefs [t(80.8) = 2.02, p = .047]. For subjects taking little to no responsibility for their breast cancer, the intervention had a reducing effect on depressive symptoms. But for subjects who felt "somewhat" to "extremely" responsible for their cancer diagnosis, depressive symptoms increased among the intervention group (Fig. 3). The effect of the intervention on intrusive/avoidant thinking was significantly moderated by whether subjects experienced a stressful event in the prior year [t(78.6) = 2.41, p = .018]. For subjects who reported a prior stressor, there was no difference in change in the amount of intrusive/avoidant thinking as a result of the intervention. But, for those with no major stressor in the prior year, the intervention group demonstrated reduced intrusive/avoidant thoughts more so than the control group (Fig. 4). No other baseline variables explored were identified as potential moderators of the intervention. Fig. 1 CONSORT flow diagram; the superscript letter "a" denotes subjects who did not complete month 1, month 2, and/or month 3 study measures and did not withdraw/discontinue, and the superscript letter "b" denotes that all subjects allocated to a study condition were included in the analysis. Table 1 Baseline characteristics, treatment, and CaringGuidance™ usage (N = 100) [15,16].
Discussion In this first randomized trial to evaluate the preliminary efficacy of CaringGuidance™, 100 women residing in five US states, diagnosed with early-stage breast cancer in the prior 3 months, were randomized to 12 weeks of unguided use of CaringGuidance™ plus usual care (intervention) or usual care alone (control). As anticipated, both the intervention and control groups experienced reductions in distress, including depressive symptoms and intrusive/avoidant thoughts, over the study period. However, preliminary support for the efficacy of CaringGuidance™ was provided by the control group's increase in distress and depressive symptoms and the intervention group's decrease in distress and depressive symptoms between months 2 and 3. This outcome could not be explained by differences at baseline, the timing of receipt of cancer treatment during the trial, or the clinical supportive services utilized, since the control group, for whom distress and depressive symptoms increased, received fewer treatments and accessed more supportive services than the intervention group during the period in which the group differences were observed. These results extend what we previously reported on the intervention group's use of and satisfaction with CaringGuidance™ [16]. These findings also contribute to the limited research on the efficacy of unguided web-based interventions specifically aimed at breast cancer-related distress. Additionally, this may be the first report of a web-based CBT intervention for women initiated within the first months of diagnosis. This work is of value given that BCS are the largest group of cancer survivors in the USA [1], and unguided interventions, which by their nature do not rely on clinical resources, are potentially sustainable [21] while also being efficacious in mental healthcare [22,40]. Baseline psychosocial characteristics of our sample reflect those in published research [39,41,42], such that greater optimism and perceived coping were negatively associated with distress and depressive symptoms and, as expected, younger age and depressive symptoms were also negatively associated [41]. Contrary to existing evidence [39], however, younger age was not related to greater distress (DT) or intrusive/avoidant thoughts. This finding may be indicative of the study's small sample and will be explored in future studies of this intervention. The study results are consistent with the other two existing web-based unguided programs specifically for breast cancer [24]; however, contrary to BREATH, we saw a potentially greater benefit of the intervention for women who were highly distressed at baseline, whereas the BREATH study found the opposite [24]. Given the exploratory and pilot nature of our study, this outcome should be examined further in a larger sample of women. Our results are also consistent with the 12-week unguided group intervention for longer-term BCS, SURVIVE, in that neither study identified significant effects on distress as measured by the IES [24]. However, in contrast, CaringGuidance™ demonstrated preliminary efficacy with respect to distress manifesting as depressive symptoms, while neither the BREATH [24] nor the SURVIVE study [25] measured depressive symptoms. Given the long-term burden of depression among BCS [7], further study of CaringGuidance™'s potential to reduce depressive symptoms in newly diagnosed women, and the role active coping may play in moderating these effects, is warranted.
Finally, our findings point to a need to further explore the relationship between CaringGuidance™ and causal attribution among newly diagnosed women. Evidence on the effects of causal attribution beliefs on psychological adjustment to breast cancer is inconsistent [43,44]. In this study, exploration of potential modifiers of CaringGuidance™ effects identified that women with access to the intervention who held high personally modifiable causal attribution beliefs about their cancer experienced an increase in depressive symptoms compared with women with low personal causal attributions. The reason for this is unclear and warrants additional study. Women were asked in this study to rate the degree to which they attributed the cancer diagnosis to their actions or inactions, but not the specific attribution or why they held their beliefs. Despite CaringGuidance™ content on the improbability that a single personal action or inaction results in a woman's breast cancer, it is possible that certain beliefs could not be overcome to affect women's depressive symptoms. Several limitations of this study must be considered. First, the sample size was based on an estimated effect size from the literature and may have resulted in a lack of power to detect interaction effects. Second, multiple potential intervention modifiers were explored with the aim of informing our future work, and thus these results were viewed with caution given the number of modifiers tested. Furthermore, in order to reduce subject burden, we opted to use single-item, study-derived questions to measure causal attribution and stressful events in the prior year rather than lengthier existing tools, which may have affected results. Nevertheless, these findings inform us of the need to further explore these variables with valid instruments in a future larger study to see whether these exploratory outcomes hold. Third, no lower limit was placed on baseline distress for study inclusion, potentially resulting in a floor effect for subjects without initial clinically significant symptoms. Future studies will require all subjects to have a clinically significant score on the DT, CES-D, or IES at baseline. Furthermore, despite our attempts to recruit African American women, the sample is primarily White and therefore not representative of all US women with breast cancer. The ability to read English was also required because CaringGuidance™ was developed in English, and, for the most part, subjects were college-educated and computer-experienced. While the sample characteristics may limit generalizability, this sample is typical of samples of women who participate in trials published within the breast cancer psychosocial literature. Finally, although the scripted monthly phone calls were made by one RA to all subjects, future results may differ without this contact. Conclusion This study provides initial support for the preliminary efficacy of CaringGuidance™, compared with usual care alone, as an unguided, web-based, self-management tool for breast cancer-related distress. Particularly promising for future study are exploratory outcomes favoring greater distress reduction for women with program access who either infrequently used active coping strategies or experienced high levels of distress after diagnosis.
Together with the feasibility and satisfaction outcomes reported earlier [16], the current findings highlight the promise of CaringGuidance™ to contribute to filling a current gap in distress management for women with newly diagnosed, early-stage breast cancer and warrant further study of this intervention. Ethical approval All procedures performed in this study involving human subjects were approved and in accordance with the ethical standards of the PI's university and that of institutions involved in the referral of subjects as well as with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent Informed consent was obtained in writing from individual participants included in this study. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Optimization of Subway Advertising Based on Neural Networks Subway advertising has become a regular part of our daily lives. Because the target audience consists of high-level consumers, subway advertising can promote the return on investment. Such advertising has taken root in various countries and regions. However, a lack of appropriate oversight, a single-track operating mode of subway advertising, and unclear price standards significantly reduce the expected advertising effects and the reasonableness of advertising quotations. Shared biking services have gained a great amount of attention in the past few years. In addition, more citizens are using public transportation, which provides a basis for analyzing subway passenger characteristics. First, we examined the use of shared bikes around subway stations to obtain information on passengers' ages. Then, using daily passenger flow volume, transfer lines, and the original subway advertising quotes, we trained backpropagation neural networks and used the results to provide new quotations. Finally, we combined passenger age structure and different passenger groups' preferences at every station to identify the most suitable advertisement type. Our goal was to make full use of transportation big data to optimize advertising quotations and advertisement selection for subway stations. We also proposed the use of electronic advertising boards to help increase subway advertising profits, decrease the financial pressure of operations, and boost public transportation development. Introduction With the increasing popularity of subways and shared bikes, both of which are commonly agreed to be environmentally friendly transport modes, the "subway + shared bike" combination is being chosen by more and more passengers in urban China. By the end of 2016, 30 cities in mainland China had their own subway systems, with 113 lines and a total length of 4152.8 kilometers. The newest subway systems are in the cities of Fuzhou, Dongguan, Nanning, and Hefei, and total passenger flow has increased to 16.09 billion, an annual growth of approximately 16.6%. According to the 2016 statistics and analysis report on the subway rapid transit system, 70% of urban passengers are transported by the subway. The year 2016 also witnessed the rapid development of shared biking services, with total capital financing reaching 7 billion yuan by March 2017. There are more than 30 brands of biking services, among which Mobike and ofo have completed a new round of financing. Using the city of Shanghai as an example, shared bikes influence local residents' daily lives to a great extent. At the same time, subway stations have greatly extended their effective service areas. For 275 out of the 304 subway stations (transfer stations not included), the area of access for bike users exceeds 500 meters. In present-day Shanghai, this new transport mode, i.e., "shared bike plus subway," has significantly reduced the heavy burden on urban transportation, especially for the Middle Ring, a ring road between the Inner Ring and the Outer Ring that covers most downtown areas. The "shared bike plus subway" pattern has become a first choice for many citizens, as it satisfies their needs by providing a convenient and efficient journey. 
The 2017 shared-bike travel data report for the 1 km around Shanghai Metro stations, released by DT Finance and Economics, suggested that the efficiency of the "shared bike plus subway" pattern increased to 1.6 times the original and that 80% of Shanghai subway passengers used shared bikes to travel among their workplaces, homes, and subway stations. In addition, shared bike users and subway passengers are reported to have a similar age range. Today's data processing technologies are quite advanced; as such, it is feasible to analyze shared bike routes using the origin-destination matrix. This analysis helps us better understand passenger characteristics and lays a solid foundation for optimizing subway advertisements. Subway advertisements are embraced by many advertisers because their placement is associated with a large population flow, a fixed target audience, and various advertising forms. Compared with other media, subway advertisements are subject to fewer administrative and legal requirements in China. Hence, subway advertisements are expanding quickly. In terms of content, form, and innovation, the advertisements are full of vitality. However, subway stations are currently classified into only three rough levels of advertising pricing: S level, A++ level, and A level. Subway passengers mainly include office-goers and white-collar professionals with an advanced educational background, a high level of income, and strong consumption capability. They are considered to be the target audience for subway advertisements. Among all of the themes, the most attractive subjects are "entertainment," "IT products," "high-quality clothing," and "tourism." As subway passengers are rational, not all products are appropriate for display on this platform. If all subway advertisements were designed to gain the attention of subway passengers, they would be likely to achieve better results. Advertising thus has the potential to be optimized and to be priced more reasonably. Subway Advertisements Are Disconnected from People's Interests, at Least in Terms of Content. Advertisers are generally more concerned about the placement and effect of advertisements. They prefer advertising that strongly draws attention, has a high exposure rate, good adaptability, large size, and reasonable prices, and is placed in subway stations with large passenger flows. Without further analysis to match products with consumers' preferences, advertisers must face high costs and low efficiency. Pricing of Subway Advertisements Is Unreasonable. Subway advertisement pricing is currently related only to the subway station level, the advertisement form, and its location. Other attributes are not considered even though they may significantly affect the audience. Unreasonable prices reduce revenue for both advertising agencies and the subway industry. The cost of subway construction is huge for a city. Pricing subway advertisements in a more reasonable manner would help shorten the payback period for subway construction and increase the subway company's revenue. This leads to a solution benefiting everyone. When it comes to advertising, the dark tunnel walls can be used to display advertising films and public service information to subway riders. This opportunity can increase profits for advertisers and transportation authorities and can attract more state funds (see [1,2]). In the field of big data, Babar and Arif [3] proposed a smart city architecture, using big data analysis to plan urban facilities. Cano et al. 
[4] proposed perspectives on big data applications using health information, with the goal of real-time analysis of high-volume and/or complex data from healthcare delivery and citizens' lifestyles. Nachiappan et al. [5] pointed out that replication and erasure coding are the most important data reliability techniques used in cloud storage systems for big data applications. In the field of backpropagation neural networks, Hilal [6] addressed the issue of multiuser detection in a non-Gaussian noise multipath channel. He also paid close attention to neural network applications and proposed a new robust neural network detector for multipath impulsive channels. The maximal ratio combining (MRC) technique was adopted to combine the multipath signals. Moreover, he discussed the performance of the proposed multiuser neural network decorrelating detector (NNDD) under the class A Middleton model and showed the performance of the system under a power imbalance scenario. The results indicated that the proposed NNDD had a marked positive effect on system performance, which was measured through the bit error rate (BER). Error backpropagation training is a key algorithm for training neural networks [7]. Prediction accuracy has been evaluated using a practical application from the aluminum smelting industry; the dynamic behavior of aluminum smelting makes that particular application well suited to neural network modeling [8]. Yasser et al. [9] proposed the implementation of the SAC (single assignment C) method using a neural network with offset error reduction to control an SISO (single input single output) magnetic levitation system. Feng et al. [10] considered the primary decisions and activities that arise during backpropagation (BP) neural network model construction, selection, and validation for a novel application. Before using a neural network, training is required until the network fits properly. Many optimized algorithms have been proposed, such as unconstrained optimization [11] and ant colony optimization algorithms [12]. In the field of congestion charging, in which the government charges road users in order to ease traffic jams, Eliasson [13] presented a cost-benefit analysis of the Stockholm congestion charging system, using observed data rather than model-forecasted data. The most important data sources were travel time and traffic flow. Sachan and Kishor [14] proposed optimum locations for the charging of electric vehicles (EVs); to determine the optimum location, the 24-hour load demand was changed at given junction nodes and the corresponding voltage sensitivity indexes were determined. A literature review revealed that significant work has been done on advertising optimization using big data and backpropagation neural networks. These studies have had a significant impact in recent years, and scientists and enterprises have contributed to theory development. However, up to now, only a few studies have applied big data and backpropagation neural networks to optimize advertisements in the subway system; there is a particular lack of research on pricing with backpropagation neural networks and of empirical studies. This provided the research opportunity pursued here. In order to better highlight the characteristics of advertising in the subway system, this paper compares it with other systems. 
Due to the special nature of public transportation, both the subway and buses have fixed stations with relatively fixed passengers. In view of this, subway advertisements will attract more attention from targeted customers. Meanwhile, according to the 2018 Shanghai comprehensive transportation annual report, the transit ride share of the subway in 2017 was 54%, the share of buses was 34%, and the share of taxis was 11%, which means that the subway has a larger volume of passenger flow and mobility than other systems. This substantially increases the effectiveness of subway advertisements. Furthermore, the report mentioned above also revealed that the average passenger volume of the subway is 9.69 million trips per day, reaching 10 million trips during working days. Accordingly, we may state that the subway has become the first public transportation choice of many commuters, resulting in pronounced subway peak periods. That calls for a time-of-use pricing system to make the overall advertising quotation more reasonable. Therefore, to improve subway advertisements, we conducted an analysis using "shared bike plus subway" data. The article proposes a novel model, based on neural networks, to generate the prices and types of subway advertisements at different stations and at different times. Backpropagation (BP) neural networks were originally developed by a group of scientists led by Rumelhart and McClelland. These networks are multilayer feedforward networks trained using an error backpropagation algorithm. BP networks learn and store many input-output mapping relationships, without the need to specify in advance the mathematical equations that describe this mapping. The learning rule uses the steepest descent method, continuously adjusting the weights and thresholds of the network by backpropagation. This minimizes the sum of squares of the network errors. This BP method is expected to be feasible even with a large dataset. In fact, there are many elements affecting the price, for example, the age structure of travelers in different time periods and at different stations. These factors are usually ignored by subway advertisement agencies. Subway advertisement prices should follow a specific trend. The neural network has a strong capacity for nonlinear simulation, self-organization, and self-learning, which can be adapted to optimize subway advertisement prices. The subway advertising optimization program is expected to reduce total subway advertising costs, improve advertising effectiveness, increase subway advertising agency revenue, and increase subway system income. All of these benefit people's livelihoods. Optimization Methodology and Purpose Using big data related to shared bike usage, this paper analyzed the passenger age structure for travelers at different subway stations during different time periods, combined that information with the advertising preferences of different passenger groups, and determined the most suitable advertisement types for different time periods. Finally, this paper proposed the electronic advertising board, in which one or more advertisers share one advertising board during the service time, alternating between advertisements based on shifts in the target passenger groups. Combined with the number of transfer lines and the existing advertising quotations for the stations, and based on a neural network model, this study developed a new optimized quotation for each station and a new price list adapted to an optimized scheme for subway advertising. 
The following four items summarize the goals of the study: (1) To develop advertisement types catering to consumer preferences and improve advertising effects: subway passenger groups change at different times; as such, the current single-track, full-time advertising is no longer the best approach. It is important to update and alternate advertisements as passenger groups change. The electronic advertising board would improve the effective publicity of these ads. (2) To increase the profit of advertising agencies: based on this study's proposed improvement, it is better to offer specific advertising quotations for different stations and time periods. This will also improve advertising agencies' profits and increase the advertisements' effects. (3) To provide a valuable reference for the subway advertising industry: our optimizing analysis can provide a reference for the subway advertising industry in terms of improving advertising reasonableness and offering a high-profit quotation. (4) To promote the return on investment for the subway and benefit society as a whole. Research Idea 3.1. Data Acquisition. Using shared bike data, this paper analyses the age structures of riders travelling during different time periods at different subway stations. Using the existing transfer scale and advertising quotations for each station, this paper optimizes the quotation using a neural network model. The model accounts for the different preferences of different passenger groups and proposes the best advertisement type for each subway station. First, Mobike (a shared bike company) usage data were collected for each time period. They include the origin and destination of each shared-bike trip, the specific service time and duration, the order ID, and the bike ID. According to a recent industry report released by the Sootoo Research Institute, an authoritative third-party data research institution in China, Mobike has taken 57% of the market share of shared bikes, and this proportion is still increasing. Thus, the passenger age range of Mobike is representative of the whole shared-bike market. Also, according to the report mentioned above, more than 80% of subway users will choose shared bikes to travel the first or last 1 km, which means the age structures of the two groups are similar. Based on Mobike's market share, the overall usage of shared bikes around subway stations was calculated. Second, the passenger age structures for different Shanghai subway stations during different time periods were determined based on the collected data. The formula is expressed in terms of the following variables: P(n | A), the proportions of different age groups using the travel mode of "shared bike plus subway"; P(n | U), the ratios of different age groups travelling by subway; A, shared bike usage around different subway stations for different time periods and age groups; a, market share; U_n, the passenger flow volumes for different time periods, age groups, and subway stations; and n, the different age groups (n = 1 means passengers below 18 years; n = 2 means passengers between 18 and 40 years; n = 3 means passengers between 41 and 50 years; n = 4 means passengers over 50 years). Third, based on the data collected through transportation cards, the passenger flow volumes during different time periods at each subway station on the studied metro line were obtained. Then, the ratio of subway use to all modes of transportation was calculated to determine the daily overall passenger flow volumes for each subway station. 
Fourth, because subway advertising agencies have ranked all the subway stations, the daily advertising costs differ across stations. This requires further pricing classification. However, there are currently only three levels in the rankings, which is insufficient for classification. As such, to improve the possibility of developing an optimized scheme for the future market, this study introduced the advertising cost per passenger, i.e., the station's total cost divided by its daily overall passenger flow volume. Fifth, using the calculated subway passenger age proportions and the known preferences of different passenger groups, we determined the proper advertisement types for each time period. Ultimately, this paper proposes electronic advertising boards, in which two or more advertisers share one advertising board during the service period, alternating based on shifts in target passenger groups. In addition, a new quotation scheme is proposed based on the sample of current advertising costs, to adapt to the new subway advertising scheme. Neural Network Training Based on the Backpropagation Algorithm. In machine learning, specifically deep learning, backpropagation (backprop, BP) is an algorithm widely used in the training of feedforward neural networks for supervised learning; generalizations exist for other artificial neural networks (ANNs) and for functions generally. Backpropagation efficiently computes the gradient of the loss function with respect to the weights of the network for a single input-output example. This makes it feasible to use gradient methods for training multilayer networks, updating weights to minimize loss; commonly one uses gradient descent or variants such as stochastic gradient descent. The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight by the chain rule, iterating backwards one layer at a time from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this is an example of dynamic programming. The term backpropagation strictly refers only to the algorithm for computing the gradient, but it is often used loosely to refer to the entire learning algorithm, including how the gradient is used, such as by stochastic gradient descent. Backpropagation generalizes the gradient computation in the delta rule, which is the single-layer version of backpropagation, and is in turn generalized by automatic differentiation, where backpropagation is a special case of reverse accumulation (or "reverse mode"). Backpropagation neural networks contain three or more layers: the input layer, hidden layer(s), and output layer [15]. Adjacent layers are fully connected, but neurons within the same layer are not connected to one another. There is a specific weight between each neuron of the input layer and each neuron of the hidden layer; this is called the signal strength. The hidden layer and the output layer process and integrate the information from the layer above. In many cases, a threshold is also considered, simulating a nervous system in the real world. In biology, a threshold represents the minimum amount of stimulation needed to activate a reaction. The neurons are not active until they reach a threshold value, after which the integrated information can be transmitted. Once learning samples are received by the neurons, the neurons' valves open, spreading the information through the hidden layers until it reaches the output layer. 
The output layer generates a response, and the resulting error is propagated back to all neurons, balancing and adjusting the many weights and fine-tuning them so that poor decisions are avoided. The key to the BP algorithm is negative gradient descent, which seeks the set of weights that minimizes the error by moving in the direction of steepest decline. The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to their correct outputs. The motivation for backpropagation is to train a multilayered neural network such that it can learn the appropriate internal representations to allow it to learn any arbitrary mapping of input to output. Because of their high efficiency, neural networks are widely used in various fields, such as water quality assessment [16], fast approximate estimation of data [17], fire sprinkler point data prediction [18], and mechanical performance prediction [19]. The design of the BP neural network differs according to the research content. The main analysis steps are as follows, as shown in Figure 1. (2) Determine the initial parameters of the network. This involves defining the maximum training parameters, the number of nodes in the hidden layers, the learning rates, and the acceptable error after training. (3) Initialize the weights w_ij and the thresholds to random values. The weights between the first layer and the second layer are defined as w_ij(t); the thresholds of the second layer are defined as B_ij(t). (4) Compute the input and output of the first-layer neurons. Assume that X is the input data and that the input and output of the first-layer neurons equal the original data, i.e., O_1 = X. The input and output variables are inconsistent in dimension; as such, the function [pn, minp, maxp, tn, mint, maxt] = premnmx is applied for normalization. (5) Compute the input of the second layer. For the second layer, the input I_2 is determined by the values of all neurons and the thresholds: I_2 = w_ij × X + B_ij × ones, where "ones" is a matrix with all elements equal to one. (8) Compute the energy function E. The energy function E is the sum of squared errors between the network outputs and the sample outputs; it serves as the signal to stop training once it becomes acceptable. Suppose the outputs of the network are Y. (9) Calculate the adjustment of the weights and thresholds between the second layer and the third layer. This is a key point of BP neural networks; in fact, the adjustments are based on the partial derivatives of E. (10) Calculate the adjustment quantities between the first layer and the second layer. (11) Calculate the adjusted weights and thresholds by adding the weights and thresholds to their adjustment quantities; these are the weights and thresholds at time t + 1. (12) The outputs of the network should be reverted (de-normalized). (13) Make a forecast based on the trained function and obtain the result D_p. (Notation from Figure 1: w_ij, the connection weights between the first layer and the second layer; w_jk, the connection weights between the second layer and the third layer.) Data Filtering. Shanghai Metro Line 1 is one of the city's oldest and busiest metro lines, and it was selected as the case study for this research. Running north-south from Fujin Road to Xinzhuang, Shanghai Metro Line 1 travels across the city through a rich diversity of urban areas. 
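Before turning to the data, the training loop outlined in the steps above can be made concrete with a minimal sketch. The following Python/NumPy code is purely illustrative and is not the authors' implementation (the study itself used MATLAB); the layer sizes, learning rate, stopping threshold, and toy data are assumptions introduced here, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPNetwork:
    """Minimal three-layer backpropagation network (input -> hidden -> output)."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.05):
        # Step (3): weights and thresholds start as small random / zero values.
        self.w_ij = rng.normal(0, 0.1, (n_hidden, n_in))   # input -> hidden weights
        self.b_j = np.zeros(n_hidden)                       # hidden-layer thresholds
        self.w_jk = rng.normal(0, 0.1, (n_out, n_hidden))   # hidden -> output weights
        self.b_k = np.zeros(n_out)                          # output-layer thresholds
        self.lr = lr

    def forward(self, x):
        # Steps (4)-(5): the first-layer output equals the (normalized) input,
        # and the hidden-layer input is w_ij * x plus the thresholds.
        h = sigmoid(self.w_ij @ x + self.b_j)
        y = self.w_jk @ h + self.b_k                         # linear output layer
        return h, y

    def train_step(self, x, t):
        h, y = self.forward(x)
        err = y - t                                          # output error
        # Step (9): adjustments for hidden -> output weights (partial derivatives of E).
        grad_w_jk = np.outer(err, h)
        grad_b_k = err
        # Step (10): adjustments for input -> hidden weights via the chain rule.
        delta_h = (self.w_jk.T @ err) * h * (1.0 - h)
        grad_w_ij = np.outer(delta_h, x)
        grad_b_j = delta_h
        # Step (11): weights/thresholds at time t+1 = old values minus lr * gradient.
        self.w_jk -= self.lr * grad_w_jk
        self.b_k -= self.lr * grad_b_k
        self.w_ij -= self.lr * grad_w_ij
        self.b_j -= self.lr * grad_b_j
        # Step (8): the energy function E (squared error) for this sample.
        return 0.5 * float(err @ err)

# Toy example: fit a stand-in mapping until E falls below an acceptable error.
net = BPNetwork(n_in=7, n_hidden=10, n_out=1)
X = rng.uniform(0, 1, (200, 7))          # stand-in for the 7 normalized input groups
T = X.mean(axis=1, keepdims=True)        # stand-in target (cost per passenger)
for epoch in range(500):
    E = sum(net.train_step(x, t) for x, t in zip(X, T))
    if E < 0.05:                         # acceptable-error stopping signal
        break
```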
Original data for the study mainly came from the Mobike Technology Company, Shanghai Telecom, Shanghai Metro, and some public websites [20,21]. The data were extracted and analyzed using the following steps. First, Shanghai Metro Line 1 runs from 05:30 to 23:30 each day. Data from the Mobike Company were classified into the following time periods: 5:30-10:00; 10:00-14:30; 14:30-19:00; and 19:00-23:30. These ranges reflect the peak hours. Reports from the Sootoo Research Institute showed the shared-bike users' age distribution (Table 1) and Mobike's market share, which is 57%. These data provide a solid foundation for estimating the use of shared bikes by people of different ages during different time periods at different subway stations. Second, by applying Bayes' theorem, this paper estimates the age structure during different time periods at all subway stations along Shanghai Metro Line 1. The variable P(n | A) is obtained from a report on the Shanghai Subway Passenger Flow Analysis, as shown in Table 2. Then, the calculated results are presented in Table 3 (the data were partially adjusted based on the catalog of the sixth census of Shanghai). Third, we counted the passenger flow and the number of transit lines. The data came from Shanghai public transportation cards, which excludes cases such as passengers using single-journey tickets. Given that 86% of passengers use the Shanghai public transportation card, the passenger flow at the subway stations was estimated for this study. Current subway advertising prices were vital data points for neural network learning. There are only three levels (S level, A++ level, and A level). As such, to make the analysis more reasonable, the "cost per passenger" is introduced as the output of the neural network learning. Cost per passenger is calculated as follows, and the results are presented in Table 4: Cost per passenger = Total cost / Daily passenger flow. Finally, a survey of a random sample of two thousand China Telecom users in Shanghai was conducted to investigate which types of advertising attract people of different genders and age groups (shown in Figures 2 and 3). Since the Mobike app requires a mobile phone, every Mobike user is assumed to have a mobile phone. Since China Telecom has a substantial market share in Shanghai, the data have a solid degree of universal application and can be applied to subway users. Understanding passenger preferences is key to maximizing advertising utility. Data Processing. After grouping the data by age and time period and combining the coefficient weights, all the data can be used as inputs and outputs for the neural network training function. Specifically, the data were organized into seven groups: the passenger flow during different time periods at different subway stations; the number of transfer lines; the percentages of passengers in different age groups, during different time periods, at different subway stations (listed in four groups, that is, passengers below 18 years, between 18 and 40 years, between 41 and 50 years, and over 50 years); and the advertisement cost per person. The original passenger flow data were counted on a daily basis. To generate more precise output, the flow data were split into several time periods during neural network training. The current price of subway advertisements does not consider passenger age structure; as such, it was treated as an equal value in the training function. 
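The pre-processing just described can be sketched as follows. This is an illustrative Python rendering, not the authors' pipeline: the function name and its inputs are hypothetical, the market-share (57%) and card-coverage (86%) scalings and the cost-per-passenger formula come from the text above, and the Bayes-based age-structure formula itself is not reproduced here.

```python
MOBIKE_MARKET_SHARE = 0.57   # Mobike's share of the shared-bike market (Sootoo report)
CARD_COVERAGE = 0.86         # share of passengers paying with the public transportation card

def preprocess_station_period(mobike_trips, bike_age_dist, daily_card_swipes, total_daily_cost):
    """Illustrative pre-processing for one station and one time period (hypothetical inputs).

    mobike_trips     : Mobike trips ending near the station during the period
    bike_age_dist    : dict age_group -> proportion among shared-bike users (Table 1)
    daily_card_swipes: transportation-card entries recorded at the station per day
    total_daily_cost : current daily advertising cost for the station
    """
    # Scale Mobike trips by market share to approximate total shared-bike usage.
    all_bike_trips = mobike_trips / MOBIKE_MARKET_SHARE
    # Estimated shared-bike usage around the station broken down by age group,
    # which feeds the paper's Bayes-based age-structure estimate (not reproduced here).
    bike_trips_by_age = {g: all_bike_trips * p for g, p in bike_age_dist.items()}
    # Scale card swipes by card coverage to approximate the full daily passenger flow.
    daily_flow = daily_card_swipes / CARD_COVERAGE
    # Network output target: cost per passenger = total cost / daily passenger flow.
    cost_per_passenger = total_daily_cost / daily_flow
    return bike_trips_by_age, daily_flow, cost_per_passenger

# Example call with made-up numbers for one station.
ages = {"<18": 0.05, "18-40": 0.70, "41-50": 0.15, ">50": 0.10}
trips_by_age, flow, cpp = preprocess_station_period(3400, ages, 52000, 9000)
```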
For forecasting, the processed data were input to obtain the age structure at different time periods, so the results could improve the rationality of the pricing. Advertisement Quotation Optimization. MATLAB can be used to generate the BP neural network training code, by entering the original data and training the network repeatedly until the degree of fit reaches a certain value. This trained network is then saved for later predictions. The training function was used to obtain the advertising cost per person for each subway station at different time periods. The result was multiplied by the corresponding passenger flow volume to obtain the advertising cost for that time period. Table 5 shows the optimized quotation scheme. Advertising Optimization. Based on the preferences of each age bracket, we can obtain the age structure at different subway stations and time periods (e.g., as seen in Table 6). The advertisement types, which comprise 14 categories, are the same as those in the China Telecom data collection. Combined with the 2017 Shanghai urban big data activity report, released by DT Finance and Economics, and the proportion of different age groups at various subway stations, the advertisement type for each subway station was obtained and was not limited to one type. Table 7 provides the advertisement types. Comparison of the Original and New Schemes. The calculations clarify the optimal advertising quotation for each subway station. In contrast to the original quotations (shown in Figures 4 and 5), the new scheme increases the profit. Comparison of the original and new advertising quotation schemes demonstrates that the new quotation is significantly higher than the original quotation, with striking rises at some subway stations. This outcome is the result of refining the influencing factors of advertising quotations, which significantly differentiates the quotations. As can be seen in Figure 4, among the stations with large growth rates, Shanghai Railway Station and Shanghai South Railway Station are both integrated transportation hubs, and Xujiahui and People's Square are intersections of multiple subway lines, so they all share the characteristic of heavy traffic. Moreover, the percentage of senior passengers (over 50 years) at these stations is very low, and at some it is even zero. Even for stations of the same advertising grade, the advertising quotation is affected by the overall passenger flow volume during different time periods, the passenger age structure, and the number of transfer lines. For example, although Xinzhuang and Caobao Rd. are of the same advertising grade (A++) and have the same number of transfer lines, Xinzhuang has a much larger daily overall passenger flow volume than Caobao Rd. As such, the new advertising quotation for Xinzhuang is higher. The stations with Grade A advertising were also impacted by daily overall passenger flow volumes; the increase in advertising quotation was proportional to the increase in daily overall passenger flow volume. Although the daily advertising costs of most stations have increased, companies only need to purchase advertising services during certain periods of time, so some companies' advertising costs have actually decreased. The highest optimized advertising price cannot exceed the advertising price corresponding to the passenger flow when the subway is fully loaded. The new advertising quotation fully taps the customers' surplus value. 
Observing the new advertising system as a whole shows that the advertising company will make more profit, as shown in Table 8 (original advertising quotation for one service day: 267,208; new advertising quotation: 846,860). Meanwhile, as for the advertisers, the new advertising scheme will increase the advertising effect. The advertising effect is defined as the number of effective customers that each yuan attracts per hour. According to data from the China Statistical Yearbook, the average growth rate of per capita health care expenditure of residents in China over the past five years has reached 13%, while the average growth rate of total consumer spending has been 8%. As the health industry is currently booming, the public's attention to this field is increasing, so the proportion of health advertisements will rise in the future. Therefore, health advertisements were selected for analysis. Take males aged 18 to 40 years who are interested in health advertisements as an example: the advertising effect increased from 0.0059 to 0.0077. The effective passenger flow volume here refers to male passengers aged 18 to 40 years who are interested in health advertisements. As the original advertising quotation is designed to cater to everyone, the total advertising quotation is the sum over every station of Shanghai Line 1 for one service day. However, the new advertising scheme recommends advertisement types for different time periods at each station, so the total advertising quotation is the value of the stations that display health advertisements. The detailed information is listed in Tables 9 and 10. Due to the lack of certain data, the study could not formally classify the passengers' age structure; however, the results do demonstrate the influence of this variable. For example, Fujin Rd. and North Zhongshan Rd. are very similar except for their passengers' age structures. However, Fujin Rd. resulted in a lower quotation than North Zhongshan Rd., which also demonstrates that the age structure has a significant influence on the advertising quotation. The number of lines also makes a difference in the new quotation. For example, among the stations classified as Grade S, the advertising quotation for South Huangpi Rd. was significantly lower than the others. This is mainly because South Huangpi Rd. has only one line, whereas the other three stations all have three lines crossing. In this case, although the daily passenger flow volume of South Shaanxi Rd. is smaller than that of South Huangpi Rd., the final advertising quotation ran counter to daily passenger flow volume because of the impact of the number of transfer lines. Figures 4 and 5 show that the new advertising quotations are no longer limited by station grade; some stations with low advertising grades had significantly larger daily passenger flow volumes than stations with high advertising grades, but the current quotation does not reflect this advantage, making the existing pricing unreasonable. The neural network training function allows the new quotation to address as many influencing factors as possible, eliminating the problems in the existing quotation scheme. Application and Popularization. The analysis makes it clear that multiple variables influence subway advertising quotations, not just the station grade. Thus, it is important to consider the subway passengers' features as well as the subway station itself. 
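As a small illustration of the arithmetic behind the comparison above, the sketch below (Python, with made-up numbers) computes a period quotation as the network-predicted cost per passenger multiplied by that period's passenger flow, and an "advertising effect" as effective customers per yuan per hour. The 18-hour service day follows the 05:30-23:30 operating hours cited earlier; the exact normalization the authors used for the effect metric is an assumption here.

```python
def period_quotation(predicted_cost_per_passenger, period_flow):
    """Quotation for one station and one time period: predicted cost per passenger
    (network output) times the passenger flow volume of that period."""
    return predicted_cost_per_passenger * period_flow

def advertising_effect(effective_customers, price_yuan, hours):
    """Illustrative reading of the 'advertising effect' metric: the number of
    effective customers attracted per yuan per hour (normalization assumed)."""
    return effective_customers / (price_yuan * hours)

# Example: a daily quotation summed over the four service periods, and the effect
# for one target group (e.g., males aged 18-40 interested in health advertisements).
periods = [(0.012, 85000), (0.009, 42000), (0.011, 60000), (0.010, 50000)]  # (cost/passenger, flow)
day_quotation = sum(period_quotation(c, f) for c, f in periods)
effect = advertising_effect(effective_customers=1200, price_yuan=day_quotation, hours=18)
```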
The neural network scientifically links the subway advertising quotation with the daily overall passenger flow volume, the station's transfer lines, and the passengers' age structure. Meanwhile, the case study helps reveal that the original quotations and advertisement types are unreasonable. There are only three station grades; therefore, there are also only three grades of advertising quotations. However, the status quo has not been thoroughly considered, resulting in lower profitability for stations with comparatively high daily overall passenger flow volumes. Furthermore, the existing advertising is in the form of a full-time display model, resulting in unbalanced advertising profit. In terms of advertisement type, it is currently uncommon to involve an analysis of passengers' age structure at particular subway stations, thus diminishing the subway advertising effect. To address these problems, this study developed a method for optimizing subway advertising using BP neural network training. The core idea is to consider all the factors influencing subway advertising quotations, including passenger flow volume during a specific time period, the number of transfer lines at a station, the passengers' age structure, and the original advertising quotation. By researching the weights of these factors, this study gradually optimized the degree of fit between network predictions and reality. This ultimately provides a new target quotation, based on network trials, to ensure the reasonableness of the quotation. Then, the analysis of the passengers' age structure helps identify the best advertisement types to cater to different passenger groups' preferences. Finally, after obtaining the new optimized advertising scheme and the recommended advertisement type for each subway station during different time periods, the electronic advertising board is proposed to implement the new subway advertising scheme. By using the electronic advertising board, it is possible to alternate multiple advertisements within one board for different passenger groups over the whole service time, taking into account the preferences of the passenger groups present in each time period. As for the advertisers, they can choose the most suitable period (perhaps one or two periods) for advertising in accordance with the new advertising scheme instead of the current single-track, full-time advertising. This may not only reduce advertising costs but also maximize the effect of advertising. As for advertisement agencies, this kind of service will certainly increase their competitiveness. Employing the time-of-use pricing system can generate greater profits overall once the total quotation has increased, as confirmed in the previous subsection. Also, employing the recommended advertisement types will attract more advertisers and bring in extra service fees. This method can be applied to better understand subway advertising and may also be applied to optimize the advertising quotations and advertisement types at bus stations, on the walls surrounding large buildings, in shopping malls, and at other locations. In terms of shop locations, this method could also be applied to analyze passenger flow features in an area to aid in selecting the best location. Conclusion This paper studied possible improvements in advertising profitability based on the transportation data of both shared bikes and subway systems. A new differentiated quotation scheme for subway advertising to increase profitability was proposed. The big data research helped estimate the consumers' surplus. 
Using the electronic advertising board, it is possible to obtain the best advertising effect at the lowest cost and increase profits for advertising agencies. To optimize price quotations, a neural network-based function was developed. The main influencing factors included the overall passenger flow volume at different subway stations during different time periods, the number of transfer lines at each station, the passengers' age structure at each station during different time periods (four age brackets), and the existing subway advertising quotations. BP neural network training automatically generates a "reasonable rule" between data inputs and outputs and memorizes this content in the network weights, ensuring that the advertising quotation reflects all the factors. The results show that all the optimized quotations were higher than the existing ones and varied among the stations. In this way, the network demonstrated the positive or negative influence of each factor on the quotations. Shanghai Metro Line 1 was selected to implement the case study on optimizing subway advertising quotations. The method is worth popularizing, as the "globalization" of shared bikes means that it can be applied to other metro lines and other advertising modes to make price quotations more reasonable and scientific. Finally, increasing the comprehensive profits from subway advertising can relieve the financial pressure on subway operations. To close this paper, we review some minor limitations. This study was conducted based on the "shared bike plus subway" travel mode. For cities without shared bikes, this method might be difficult to apply, mainly due to the lack of the required data. To make the ranking standard uniform, this study performed some conversions, possibly decreasing the accuracy. Also, the division of passenger age groups is limited by the raw data, so some characteristics may be lost; this may be improved in future studies. Despite these limitations, this paper may help boost subway advertising stakeholders' profits and better regulate the subway advertising market to offer more reasonable quotations. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
The Microbiota and Cancer Cachexia Cancer cachexia is a multifactorial syndrome defined by weight loss, muscle wasting, and systemic inflammation. It affects the majority of patients with advanced cancer and is associated with poor treatment response, early mortality and decreased quality of life. The microbiota has been implicated in cancer cachexia through pathways of systemic inflammation, gut barrier dysfunction and muscle wasting. The imbalance of the microbiota, known as dysbiosis, has been shown to influence cancer cachexia. Bacteria that play beneficial and detrimental roles in the disease pathogenesis have been identified. The phenotype of cancer cachexia is associated with decreased levels of Lactobacillales and increased levels of Enterobacteriaceae and Parabacteroides. Currently, there are no treatment options that demonstrate increased survival or the quality of life in patients suffering from cancer cachexia. Through the manipulation of beneficial bacteria in the gut microbiota, different treatment options have been explored. Prebiotics and probiotics have been shown to improve outcomes in animal models of cachexia. Expounding on this mechanism, fecal microbiota transplant (FMT) holds promise for a future treatment of cancer cachexia. Further research is necessary to address this detrimental disease process and improve the lives of patients suffering from cancer cachexia. Introduction Cachexia is a multifactorial syndrome defined by weight loss greater than 5%, weight loss greater than 2% in those who have a BMI < 20 kg/m 2 or depletion in skeletal muscle mass [1]. Cachexia is seen in approximately 80% of patients with advanced cancer and has been reported to contribute to 30% of cancer deaths [2]. The clinical phenomenon of cachexia has been identified for centuries, originally documented by Hippocrates as "the flesh is consumed, the shoulders, clavicles, chest and thighs melt away . . . the illness is fatal" [3]. Modern research has elucidated a complex interplay of cytokines, inflammation and metabolic derangement. Cachexia includes not only weight loss, but also adipose tissue wasting, muscle atrophy, and decreased appetite, with metabolic dysfunction preceding these physical signs [2,4]. Cachexia has been noted in many chronic inflammatory conditions such as acquired immunodeficiency syndrome (AIDS), sepsis, autoimmune disorders, chronic lung disease and cancer. Cancer cachexia is particularly devastating as it is predictive of early mortality, poor response to chemotherapy and can even be a direct cause of death [5]. In addition to quantity, cancer cachexia affects the quality of patients' lives. Muscle and adipose wasting along with progressive anorexia can be particularly distressing to patients and their family members. Early interventions such as exercise, nutritional supplementation, counseling and medications have been trialed to no avail. A number of different pharmacologic therapies have been tested with a focus on appetite stimulants, anabolic agents and metabolic inhibitors. Short courses of corticosteroids, progesterone analogs and more recently cannabinoids have been used to enhance appetite stimulation in patients with cancer cachexia. Anabolic steroids and recombinant growth hormone have been evaluated for their theoretical mechanism of action in muscle anabolism; however, no statistically significant benefit has been discovered [6]. Metabolic inhibitors have also been under investigation for their proposed benefit in decreasing systemic inflammation. 
Eicosapentaenoic acid (EPA) has been suggested to decrease IL-6, but studies have not consistently shown improvement. Additionally, TNF-α antibodies have not shown any statistically significant benefit in humans despite their mechanism [7]. Overall, pharmaceutical interventions have not shown a significant survival benefit or an improvement in quality of life in patients with cancer cachexia [4]. Emerging research has highlighted the role of the microbiota and its influence on the pathogenesis of cancer cachexia. By definition, the human microbiota are the bacteria, fungi, protozoa and viruses coexisting within the human body. The microbiota is composed of 100 trillion microorganisms, far outnumbering the cells of the human body. A common misconception, the term "microbiome" is defined as the genetic material that makes up these organisms [8]. Recent advances in DNA sequencing technology have allowed for exploration into the previously unknown and underappreciated world of the microbiome. Bacteria, fungi and viruses that could not previously be recognized with standard culturing techniques are now identifiable and can be separated into their taxonomies using high throughput DNA sequencing. Bacteria and archaea are sequenced based on their 16S rRNA subunit and fungal components are differentiated based on their internal transcribed spacer (ITS), 18S rRNA, or 26S rRNA regions [9] The metabolic activity of the microbiota has been evaluated on a multitude of scales ranging from a single cell to the epidemiology of their hosts. At the level of a single cell, flow cytometry, fluorescent in situ hybridization (FISH), single cell mass spectrometry and other techniques have been used to elucidate cellular enzymatic activity, gene content and growth rate. These techniques combined with metagenomics (study of genomes within the microbial community), metatranscriptomics (study of gene expression within the microbial community) and metabolomics (study of small molecules released by a microbial community) allow for novel understandings of microbial metabolism and environmental interactions [10]. Through symbiosis, the microbiota and host have evolved to become a "super-organism" with intertwining processes of nutrition and metabolism [8]. Nearly all of the microbiota reside within the alimentary tract though it has been shown to influence the human body both locally and systemically. Beyond the gastrointestinal tract, associations exist between the oral, skin, respiratory and genitourinary microbiota and the pathogenesis of malignancies [11][12][13][14][15][16][17]. Furthermore, the microbiota has been shown to be intimately involved with the modulation of cancer treatment. In their review, Alexander et al. proposed the "TIMER" mechanistic framework to detail the influence of the microbiota on cancer treatment modalities [18]. This acronym describes the Translocation, Immunomodulation, Metabolism, Enzymatic degradation, Reduced diversity and ecological variation. Interactions between the microbiota and cancer therapies have been proven in multiple human, animal and in vitro studies and provide an exciting new prospect for personalized medicine [18]. Alternatively, the progression of cancer has been shown to alter the makeup of the microbiota. New studies have shown that chemotherapy induced mucositis has been proven to directly influence taxonomic shifts of bacteria [19]. Current research has focused on the influence of the microbiota in carcinogenesis. 
However, few studies exist on the role of the microbiota in cancer cachexia. This review centers on the current literature available on the influence and interactions of the microbiota and cancer cachexia. It discusses the mechanisms of the microbiota implicated in cancer cachexia and dysbiosis (Figure 1). It further highlights the positive outcomes of restoring beneficial bacterial flora and potential targets for the treatment of cancer cachexia. Systemic Inflammation Cancer cachexia is a multifaceted clinical syndrome that is characterized by disequilibrium of inflammation, gut permeability and muscle wasting [13,20]. Systemic inflammation is a hallmark of cancer-associated cachexia and when compared to controls, cachectic subjects have a higher level of inflammatory markers. In patients with cancer cachexia, elevations of acute phase reactants, such as C-reactive protein (CRP), fibrinogen and inversely albumin, have been correlated with disease progression, decreased survival and poor quality of life [21]. Pro-inflammatory cytokines have been implicated in a broad spectrum of cancer types, not defined by a single organ. Causality of cachexia has been established in interleukin (IL)-6, IL-1 and tumor necrosis factor (TNF)-α when injected systemically in animal models. TNF-α has been shown to directly cause muscle breakdown through the induction of the ubiquitin-proteasome system (UPS). 
It causes metabolic derangement through proteolysis and decreased synthesis of protein and lipids [22]. IL-6 has also emerged as an influential cytokine in the pathogenesis of cancer cachexia. IL-6 is multifunctional and is involved in wound healing and tissue regeneration. Conversely, IL-6 has been correlated with tumorigenesis, muscle wasting and decreased survival time in patients with cancer cachexia [23]. Patients with advanced cancer have elevated levels of IL-6 which correlate to weight loss, anemia and depression. Recent trials with tocilizumab, a humanized IL-6 receptor antibody, have shown promise in reducing muscle atrophy, but the overall safety of this drug continues to require further investigation [24]. Microbiota and systemic inflammation are linked. The role of the microbiota and its influence on inflammation has been identified in a number of disease states including obesity, insulin resistance, cardiovascular disease, inflammatory bowel disease, asthma and carcinogenesis. Recent studies have focused on obesity and the concept of "metainflammation", which refers to the concept of chronic systemic inflammation and metabolic dysfunction [25]. Cani et al. described that mice with high-fat diets had elevated levels of intestinal Firmicutes and Proteobacteria [26]. They exhibited increased intestinal permeability, greater lipopolysaccharide (LPS) serum concentration and endotoxemia [26]. Systemic inflammation occurred through the activation of Toll-like receptors (TLR) and an overproduction of IL-1β, IL-6, IL-8 and TNF-α [27]. Carvalho et al. aimed to evaluate the effect of antibiotic treatment on subclinical inflammation and its clinical manifestations [28]. In their study design, they used mice fed with high-fat diets and subjected one arm to treatment with antibiotics (ampicillin, neomycin and metronidazole for 8 weeks). Then, they analyzed mouse feces, blood samples and biopsies of liver, adipose and muscle tissue. Using metagenome analysis, they found significant alterations in the microbiota with a reduction in Bacteroidetes and Firmicutes. Additionally, mice treated with antibiotics showed decreased levels of circulating LPS, decreased levels of fasting glucose, insulin, TNF-α and IL-6 [28]. The link between the human microbiota, systemic inflammation and cancer cachexia has yet to be fully elucidated in current research. Further studies are needed to identify the pathways and mechanisms involved in the pathogenesis and clinical presentation of cancer cachexia. The focus of future research should aim to relate the alterations in the microbiota and systemic inflammation in patients with cancer cachexia. Gut Barrier Dysfunction The lining of the gastrointestinal tract serves as a barrier between the internal milieu and luminal contents, which includes the bacteria, fungi, protozoa and viruses that make up the microbiota. It serves to protect the body from intraluminal pathogens through the recognition of foreign microbes and secretion of antibodies, antimicrobial peptides and mucus [29]. Its defense is threefold. It includes a biologic barrier made up of the microbiota, an immune barrier made of gastrointestinal immune cells and a mechanical barrier between the intestinal epithelial cells and capillary endothelial cells [30]. Gut barrier permeability of the small and large intestine is regulated by cell junctions (tight junctions, adherens junctions and desmosomes) that connect epithelial cells. 
Tight junctions, in particular, have been shown to have a dynamic response to a variety of factors, both extrinsically and intrinsically [31]. External factors such as drugs, cytokines and chemicals affect the permeability of the gut barrier [32][33][34]. More recently attention has been turned to the internal factors influencing gut barrier dysfunction, specifically the effects of the microbiota. Current literature has hypothesized that resident microbiota cause gut barrier dysfunction leading to increased translocation of bacterial toxins and subsequent systemic inflammation. It has been proposed that bacterial endotoxins such as LPS are able to seep through a more permeable gut barrier and into the bloodstream. As noted above, this leads to systemic inflammation and potentially the clinical presentation of cancer cachexia [35]. Puppa et al. demonstrated evidence of this correlation in their study of Apc Min/+ cachexia animal model. The Apc Min/+ mouse is an animal model with a nonsense point mutation in the tumor suppressor gene Apc that causes colonic tumor development. It is used as a model for cancer cachexia as mice begin to exhibit progressive weight loss when tumor burden progresses. Gut barrier dysfunction was measured by evaluating permeability to neutral hydrophilic polymers. When compared to controls, Apc Min/+ mice demonstrated increased gut barrier permeability. The timing of onset of gut barrier dysfunction correlated with both the onset and progression of cancer cachexia. Serum concentrations of IL-6 also increased in parallel with worsening cachexia and gut barrier permeability. Measurements of endotoxemia were fivefold higher in severely cachectic mice. Authors further described a number of major shifts in the metabolism of Apc Min/+ mice, including sequelae of hypothermia, hypertriglyceridemia and insulin resistance. These metabolic changes are also frequently noted in patients with progressive cachexia [36]. Jiang et al. performed a comparative study of human patients with gastric adenocarcinoma to further evaluate the relationship between cancer cachexia, microbiota, intestinal barrier dysfunction, bacterial translocation and systemic inflammation [37]. They evaluated the changes in microbial contents, structural basis of tight junctions, and inflammatory cytokines. They studied the differences between cachectic patients and noncachectic patients using a unique population of patients with gastric cancer involving the transverse mesocolon. Cachectic patients were defined using clinical criteria of weight loss >10% of preillness weight and CRP >10 mg/L. They used a "sugar-drink test" to measure the gradient in urine as a marker for intestinal permeability. Intraoperative samples of the wall of the large intestine, mesenteric lymph nodes and blood samples from the middle colic, portal and peripheral veins were obtained. Fecal samples were obtained prior to any intervention. Their results indicated that cachectic patients with gastric adenocarcinoma had higher levels of intestinal barrier dysfunction with increased levels of claudin (channel-forming) transmembrane proteins and decreased levels of occludin transmembrane proteins. Cachectic patients exhibited a higher level of bacterial translocation, increased systemic inflammatory cytokines and significant differences in the diversity of intestinal flora, although individual bacterial species were not identified [37]. Bindels et al. 
further evaluated the interactions between cancer cachexia, gut barrier dysfunction and the microbiome through both human and animal studies [38]. Their mouse model for cancer cachexia was generated by the ectopic transplantation of C26 colon carcinoma cells. Using control mice, they evaluated the pathologic changes occurring in cancer cachexia. They found significant evidence of alterations in intestinal homeostasis in C26 cachectic mice. This is best demonstrated by an overall decrease in intestinal tissue weight, increased villi length and crypt depth and increased gut permeability with increased claudin proteins. Microbial composition was also altered with an increase in Enterobacteriaceae. Increased gut permeability has been associated with an increase in proinflammatory bacterial translocation. The acute phase reactant lipopolysaccharide-binding protein (LBP) was used as a marker for translocation and increased exogenous antigen load. This study showed an increase in both IL-6 and LBP correlating to the manifestation of cancer cachexia. In search of the underlying mechanisms to gut barrier dysfunction, they pair-fed control mice to evaluate the influence of anorexia. They found that intestinal barrier dysfunction and microbial changes in C26 mice could not be attributed to anorexia. Interestingly, they administered an anti-IL-6 antibody and evaluated the downstream effects. In the C26 model, it not only prevented alterations in the microbiota but also improved weight loss, muscle atrophy and food intake. However, there was a slight increase in tumor size. To evaluate this existing data in humans, a prospective cross-sectional study on patients with lung and colon cancer was performed. Serum measurements of IL-6 and LBP were obtained in cachectic and noncachectic patients. They found that cachectic patients demonstrated elevations in serum IL-6 and LBP. Additional multivariate analysis revealed that increased LBP was significantly predictive of poor outcome in cancer patients. It was proposed that in the future LBP might be utilized as a biomarker in cancer cachexia [38]. Given this data, it is conceivable that the immune dysfunction may be both a cause and an effect of intestinal dysfunction and subsequent bacterial translocation. Further research is needed to elucidate the intricacies of the vicious cycle of gut barrier dysfunction, dysbiosis, bacterial translocation and systemic inflammation. Muscle Wasting The loss of skeletal muscle is a key factor in the development of cancer cachexia. A decrease in muscle mass is associated with a loss in independence and overall quality of life. The main function of muscle tissue is skeletal stabilization, but it also plays a role in macronutrient storage and the excretion of cytokines. Recent studies have examined the endocrine nature of skeletal muscle and the cross-talk between other organ systems. As current literature continues to evolve, focus has shifted toward exploring the notion of a gut-muscle axis. Recently, Nay et al. evaluated the interactions between the gut microbiota and muscle function [39]. They exposed mice to 21 days of broad-spectrum antibiotics and studied how it altered skeletal muscle function. They found that mice treated with antibiotics had decreased endurance and increased muscle fatigue. Mice resumed their function after natural bacterial reseeding. Notably, these changes were not associated with changes in muscle mass, muscle composition or mitochondrial function. 
However, the authors found a parallel relationship with G protein-coupled receptor 41, sodium-glucose cotransporter 1 and muscle glycogen levels. These results support an interaction between the gut microbiota, glucose metabolism/storage and muscle function [39]. Frost et al. explored the influence of LPS on cytokine stimulation in skeletal muscle [40]. They showed that LPS modulates the secretion of inflammatory cytokines, specifically TNF-α and IL-6, from muscle cells both in vitro and in vivo. Using C2C12 myoblasts, they monitored the cytokine response within these cells. They found that administration of LPS resulted in an increase of TNF-α and IL-6 release from myoblasts. These responses were blunted in mice with a TLR-4 mutation, suggesting the involvement of TLR-4 in this process [40]. Additional studies have demonstrated a correlation between circulating IL-6 and the suppression of protein synthesis and muscular atrophy. Haddad et al. evaluated the direct effects of IL-6 on muscle [41]. Administration of IL-6 resulted in overall muscular atrophy and specifically loss of myofibrillar protein. They proposed that this consequence occurred through downregulation of growth factor-mediated intracellular signaling [41]. TNF-α has also been shown to accelerate apoptosis in muscle cells. Li et al. evaluated the effect of TNF-α on muscle cells both in vitro and in vivo [42]. They found that TNF-α upregulates atrogin1/MAFbx gene expression, which increases muscle wasting. This effect was observed within 2 h in C2C12 myotubes and within 4 h in mouse skeletal muscle [42]. Put into context with the aforementioned data regarding gut barrier dysfunction and systemic inflammation, the direct activation of muscular and systemic cytokines by LPS may be another step toward explaining the inner workings of the gut microbiota-muscle axis. Dysbiosis Diversity in the gut microbiota has become increasingly recognized within the last ten years. Interest in this topic has intensified as investigators continue to discover new applications of the field. The microbiota is not a stagnant physiologic feature, but a dynamic aspect of the human body. Alterations in microbial composition have been shown to occur primarily through the first three years of life and then again later in life, after the age of 65 [43]. Throughout the human lifespan, the microbiota is primarily composed of bacteria from the phyla Bacteroidetes and Firmicutes [44]. The Human Microbiome Project is currently working to sequence the microbial genome and understand the role of the microbiome in health and human disease. Its goal is to determine whether humans share a core microbiome and to evaluate how deviation from the norm correlates with different disease states [45]. A landmark study by Bäckhed et al. first identified how dysbiosis affects host metabolism [46]. They evaluated the response of germ-free mice to the initiation of a Western-style, high-fat diet and concluded that the microbiota influences host metabolism. Specifically, they found that germ-free mice exhibited elevated skeletal muscle and liver levels of AMP-activated protein kinase (AMPK) and elevated levels of fasting-induced adipose factor. Additional studies have highlighted the involvement of the microbiota in amino acid bioavailability and in diversifying the bile acid profile [46]. Bindels et al. have performed several studies over the last decade to further elucidate the role of the microbiota in cancer cachexia.
Using a mouse model of leukemia, they were able to define the differences in microbial composition between cachectic and noncachectic mice. Cells ectopically expressing Bcr-Abl were transplanted into mice, which subsequently developed the clinical picture of cancer cachexia. DNA sequencing of the microbiota was then performed using 16S rRNA gene analysis. When compared to controls, cachectic mice exhibited a fifty-fold decrease in Lactobacillus species, particularly in L. reuteri and L. johnsonii/gasseri. Levels of Enterobacteriaceae and Parabacteroides goldsteinii were found to be increased [47,48]. In a separate study, they further detailed the deleterious effects of Enterobacteriaceae and identified Klebsiella oxytoca as a gut pathobiont. A pathobiont is defined as a potentially pathogenic bacterium that is not harmful in normal settings. They sought to explain why cachectic mice exhibited lower colonization resistance to K. oxytoca and identified a role for host-derived nitrate, linked to a reduction in PPAR-γ signaling, in perpetuating the growth of K. oxytoca [49]. Evaluation of dysbiosis has not been limited to the gut. Li et al. analyzed the skin microbiota in patients with cancer cachexia and compared it to that of normal controls [14]. Though confounding factors may be present, they found that patients with cancer cachexia had significantly reduced Corynebacterium species in their skin microbiota when compared to healthy patients [14]. Overall, the identification of microbial disturbances in patients suffering from cancer cachexia allows for further targeted research. In combination with the sequencing data collected by the Human Microbiome Project, the potential exists for new therapeutic opportunities in the treatment of cancer cachexia. Probiotics Probiotics have become an increasingly popular research focus in the last three decades. Defined as "live microorganisms that, when administered in adequate amounts, confer a health benefit on the host," probiotics have been found to have a multitude of effects both locally and systemically in the human body [50]. The study of probiotics in cancer cachexia is still in its infancy, but intriguing new evidence has been proposed. Varian et al. recently evaluated the role of L. reuteri as a probiotic in cancer cachexia [51]. Using the widely accepted Apc MIN/+ mouse model of colon cancer cachexia, they administered L. reuteri via drinking water. They found that administration of this lactic acid-producing Gram-positive bacterium was associated with larger gastrocnemius muscle mass and decreased evidence of muscle atrophy. Oral L. reuteri administration was also associated with an increased lifespan, a larger thymus and a decrease in FoxN1 expression, a transcription factor involved in systemic inflammation [51]. Bindels et al. have led the field in probiotic trials in cancer cachexia [47]. In the aforementioned study, they noted a decrease in the levels of L. reuteri and L. johnsonii/gasseri in a leukemia mouse model of cachexia. Upon supplementation with oral L. reuteri, these mice exhibited reduced expression of muscle atrophy markers, particularly in the gastrocnemius and tibialis muscles. They also demonstrated a reduction in systemic inflammatory cytokines (IL-6, monocyte chemoattractant protein-1, IL-4, granulocyte colony-stimulating factor) [47]. In additional studies, they continued to elaborate on this evidence by utilizing a synbiotic approach combining a probiotic (L. reuteri) and a prebiotic (inulin-type fructans).
They evaluated the effect of this combination in both colon cancer (C26) and leukemic (BaF) mouse models of cancer cachexia. They found that administration of this combination was associated with decreased cancer cell proliferation and decreased muscle wasting. Furthermore, mice treated with L. reuteri and inulin-type fructans showed decreased morbidity and prolonged survival [48]. The study of probiotics continues to expand, as it holds promise to become a new treatment modality for patients with cancer cachexia. As with many emerging medical treatments, the safety of probiotics in cancer patients will need to be thoroughly evaluated, and this question has begun to be examined. Redman et al. performed a systematic review of seventeen randomized controlled studies involving the safety of probiotics in cancer patients [52]. They reported a large variety of adverse effects, including an increased risk of sepsis in patients with malignancies. Given these patients' predisposition to adverse effects from malignant progression, there has been insufficient evidence to support direct causality [52]. Additional studies are needed to evaluate the safety and efficacy of probiotics in patients with cancer cachexia prior to their clinical implementation. Prebiotics Many different avenues for altering the gut microbiota have been explored. In this pursuit, the idea of a prebiotic was developed. Prebiotics are described as a "nondigestible food ingredient that beneficially affects the host by selectively stimulating the growth and/or activity of one or a limited number of bacteria in the colon, and thus improves host health." Bindels et al. specifically evaluated the influence of pectic oligosaccharides (POS) and inulin (INU) on cancer cell proliferation in their leukemic mouse model of cachexia [53]. In comparing the two prebiotics, they found that POS decreased metabolic alterations, delayed anorexia and reduced fat mass loss. However, it did not affect the rate of hepatic cancer cell invasion. Conversely, the use of INU in this mouse model was associated with a decrease in hepatic cancer cell invasion [53]. Huang et al. evaluated this concept using the Apc MIN/+ mouse model of colon cancer [54]. They administered triterpene saponins (specifically ginsenoside-Rb3 and ginsenoside-Rd) orally to mice and evaluated the downstream effects on the gut epithelium, inflammatory markers and the microbiota. Though they did not specifically evaluate changes in body weight, they reported that mice receiving ginsenoside-Rb3 and ginsenoside-Rd had improvement in their gut epithelium and a decrease in pro-inflammatory markers and cachexia-associated bacteria [54]. Though this research is promising, there is a paucity of human data and further studies are needed before definitive conclusions can be drawn. Fecal Transplantation Fecal microbiota transplantation (FMT) is defined as "the engraftment of microbiota from a healthy donor into a recipient, which results in restoration of the normal gut microbial community structure." The technology of FMT has existed for over 50 years, but research into this therapy has recently increased exponentially [55]. The majority of studies have focused on its utility in treating refractory Clostridium difficile infection; historically, FMT has been administered using fecal enemas, nasoduodenal tubes or colonoscopy. The safety of this process has recently been called into question, particularly in immunocompromised patients.
Adverse events following FMT are reported in 28.5% of cases, with the majority of complications consisting of abdominal pain and discomfort. Serious adverse reactions such as infection, peritonitis and disease relapse are not uncommon, occurring at a rate of 9.2%, and the risk of death was estimated to be 3.5% [56]. However, when compared with FMT in immunocompetent patients, there was no statistical difference in adverse events, and this analysis was likely confounded by the original indications for FMT. Wang et al. performed a systematic review that analyzed 50 publications and a total of 1089 immunocompromised patients who received FMT [57]. This involved a wide variety of patients, including those with HIV, AIDS, solid organ transplant and malignancy. They concluded that there appears to be no increased risk of FMT administration in immunocompromised patients, but data on long-term outcomes are not available [57]. Theoretically, FMT may be a safe option in the treatment of cancer cachexia, but additional research is needed to evaluate its overall safety and efficacy. Conclusions Cachexia remains a devastating problem for cancer patients. This clinical diagnosis affects their overall treatment course, survival and quality of life. Cancer cachexia is a complex manifestation of systemic inflammation, gut barrier dysfunction and muscle wasting. The initial catalyst of this deadly triad remains unidentified. Within the last two decades, studies have aimed to define new treatment strategies for combating cancer cachexia. Currently, no medical therapy in clinical practice has provided a meaningful survival benefit or improvement in quality of life. Studies on probiotics and prebiotics have been successful in mouse models but remain in their infancy; they hold the promise of providing future treatment options for patients with cancer cachexia. The technology of FMT has been available for a number of years, but research on this treatment modality has recently flourished. Based on its mechanism of restoring the gut microbial community structure, FMT has the potential to provide a new and innovative treatment strategy for patients with cancer cachexia. Due to the novelty of these therapeutic options, long-term safety profiles have yet to be described in the literature, and further studies of their safety in patients with cancer cachexia are needed. Many unanswered questions remain regarding the interplay between the microbiota and cancer cachexia. Unearthing the additional components of these local and systemic interactions is critical. New solutions for the treatment of this disease process have the potential to prolong survival and, ultimately, improve the quality of life for patients suffering from cancer.
VC density of set systems definable in tree-like graphs We study set systems definable in graphs using variants of logic with different expressive power. Our focus is on the notion of Vapnik-Chervonenkis density: the smallest possible degree of a polynomial bounding the cardinalities of restrictions of such set systems. On one hand, we prove that if $\varphi(\bar x,\bar y)$ is a fixed CMSO$_1$ formula and $\cal C$ is a class of graphs with uniformly bounded cliquewidth, then the set systems defined by $\varphi$ in graphs from $\cal C$ have VC density at most $|\bar y|$, which is the smallest bound that one could expect. We also show an analogous statement for the case when $\varphi(\bar x,\bar y)$ is a CMSO$_2$ formula and $\cal C$ is a class of graphs with uniformly bounded treewidth. We complement these results by showing that if $\cal C$ has unbounded cliquewidth (respectively, treewidth), then, under some mild technical assumptions on $\cal C$, the set systems definable by CMSO$_1$ (respectively, CMSO$_2$) formulas in graphs from $\cal C$ may have unbounded VC dimension, hence also unbounded VC density. 1 Introduction VC dimension. VC dimension is a widely used parameter measuring the complexity of set systems. Since its introduction in the 70s in the seminal work of Vapnik and Chervonenkis [18], it became a fundamental notion in statistical learning theory. VC dimension has also found multiple applications in combinatorics and in algorithm design, particularly in the area of approximation algorithms. The original definition states that the VC dimension of a set system F = (U, S), where U is the universe and S is the family of sets, is equal to the supremum of cardinalities of subsets of U that are shattered by F. Here, a subset X ⊆ U is shattered by F if the restriction of F to X, defined as the set system F[X] = (X, {S ∩ X : S ∈ S}), is the whole powerset of X. In many applications, the boundedness of the VC dimension is exploited mainly through the Sauer-Shelah Lemma [15,17], which states that a set system F over a universe of size n and of VC dimension d contains only $O(n^d)$ different sets. As a bound on VC dimension is inherited under restrictions, this implies that for every subset A of the universe, the cardinality of the set system F[A] is at most $O(|A|^d)$. This polynomial bound on the sizes of restrictions distinguishes set systems with bounded VC dimension from arbitrary set systems, where the exponential growth is witnessed by larger and larger shattered sets. However, for many set systems appearing in various settings, the bound provided by the Sauer-Shelah Lemma is far from optimum: the degree of the best possible polynomial bound is much lower than the VC dimension. This motivates introducing a more refined notion of the VC density of a set system, which is (slightly informally) defined as the lowest possible degree of a polynomial bounding the cardinalities of its restrictions. See Section 2.1 for a formal definition. The Sauer-Shelah Lemma then implies that the VC density is never larger than the VC dimension, but in fact it can be much lower. This distinction is particularly important for applications in approximation algorithms, where having VC density equal to one (which corresponds to a linear bound in the Sauer-Shelah Lemma) implies the existence of $\varepsilon$-nets of size $O(\frac{1}{\varepsilon})$ [1], while a super-linear bound implied by the boundedness of the VC dimension gives only $\varepsilon$-nets of size $O(\frac{1}{\varepsilon}\log\frac{1}{\varepsilon})$ (see e.g. [10]).
This di erence seems innocent at rst glance, but shaving o the logarithmic factor actually corresponds to the possibility of designing constant-factor approximation algorithms [1]. De ning set systems in logic. In this work we study set systems de nable in di erent variants of logic over various classes of graphs. We concentrate on nding a precise understanding of the connection between the expressive power of the considered logic L and the structural properties of the investigated class of graphs C that are necessary and su cient for the following assertion to hold: L-formulas can de ne only simple set systems in graphs from C, where simplicity is measured in terms of the VC parameters. To make this idea precise, we need a way to de ne a set system from a graph using a formula. Let ϕ(x,ȳ) be a formula of some logic L (to be made precise later) in the vocabulary of graphs, wherex,ȳ are tuples of free vertex variables. Note here that the partition of free variables intox andȳ is xed; in this case we say that ϕ(x,ȳ) is a partitioned formula. Then ϕ de nes in a graph G = (V, E) the set system of ϕ-de nable sets: Here, Vx and Vȳ denote the sets of evaluations of variables ofx andȳ in V , respectively. In other words, everyv ∈ Vȳ de nes the set consisting of all thoseū ∈ Vx for which ϕ(ū,v) is true in G. Then S ϕ (G) is a set system over universe Vx that comprises all subsets of Vx de nable in this way. For an example, if |x| = |ȳ| = 1 and ϕ(x, y) veri es whether the distance between x and y is at most d, for some d ∈ N, then S ϕ (G) is the set system whose universe is the vertex set of G, while the set family comprises all balls of radius d in G. The situation when the considered logic L is the First Order logic FO was recently studied by Pilipczuk, Siebertz, and Toruńczyk [12]. They showed that the simplicity of FO-de nable set systems in graphs is tightly connected to their sparseness, as explained formally next. On one hand, if C is a nowhere dense 1 class of graphs, then for every partitioned FO formula ϕ(x,ȳ), ϕ de nes in graphs from C set systems of VC density at most |ȳ|. On the other hand, if C is not nowhere dense, but is closed under taking subgraphs, then there exists a partitioned FO formula that de nes in graphs from C set systems of arbitrarily high VC dimension, hence also arbitrarily high VC density. Note that one cannot expect lower VC density than |ȳ| for any non-trivial logic L and class C, because already the very simple formula α(x,ȳ) = |ȳ| i=1 (x = y i ) de nes set systems of VC density |ȳ| in edgeless graphs. Thus, in some sense the result stated above provides a sharp dichotomy. In this work we are interested in similar dichotomy statements for more expressive variants of logic on graphs, namely MSO 1 and MSO 2 . Recall that MSO 1 on graphs extends FO by allowing quanti cation over subsets of vertices, while in MSO 2 one can in addition quantify over subsets of edges. This setting has been investigated by Grohe and Turán [9]. They proved that if graphs from a graph class C have uniformly bounded cliquewidth (i.e. there is a constant c that is an upper bound on the cliquewidth of every member of C), then every MSO 1 formula de nes in graphs from C set systems with uniformly bounded VC dimension. They also gave a somewhat complementary lower bound showing that if C contains graphs of arbitrarily high treewidth and is closed under taking subgraphs, then there exists a xed MSO 1 formula that de nes in graphs from C set systems with unbounded VC dimension. Our contribution. 
We improve the results of Grohe and Turán [9] in two aspects. First, we prove tight upper bounds on the VC density of the considered set systems, and not only on the VC dimension. Second, we clarify the dichotomy statements by showing that the boundedness of the VC parameters for set systems definable in MSO 1 is tightly connected to the boundedness of cliquewidth, and there is a similar connection between the complexity of set systems definable in MSO 2 and the boundedness of treewidth. Formal statements follow. For the upper bounds, our results are captured by the following theorem. Here, CMSO 1 and CMSO 2 are extensions of MSO 1 and MSO 2 , respectively, by modular predicates of the form |X| ≡ a mod p, where X is a monadic variable and a, p are integers. Also, C 2 MSO 1 is a restriction of CMSO 1 where we allow only modular predicates with p = 2, that is, checking the parity of the cardinality of a set. Theorem 1. Let C be a class of graphs and ϕ(x̄,ȳ) be a partitioned formula. Additionally, assume that one of the following assertions holds: (i) C has uniformly bounded cliquewidth and ϕ(x̄,ȳ) is a CMSO 1 -formula; or (ii) C has uniformly bounded treewidth and ϕ(x̄,ȳ) is a CMSO 2 -formula. Then there is a constant c ∈ N such that for every graph G ∈ C and non-empty vertex subset A ⊆ V (G), $|S_\varphi(G)[A]| \leq c \cdot |A|^{|\bar y|}$. In particular, this implies that for a partitioned formula ϕ(x̄,ȳ), the class of set systems S ϕ (C) has VC density |ȳ| whenever C has uniformly bounded cliquewidth and ϕ is a CMSO 1 -formula, or C has uniformly bounded treewidth and ϕ is a CMSO 2 -formula. Note that Theorem 1 provides much better bounds on the cardinalities of restrictions of the considered set systems than bounding the VC dimension and using the Sauer-Shelah Lemma, as was done in [9]. In fact, as argued in [9, Theorem 12], even in the case of defining set systems over words, the VC dimension can be tower-exponentially high with respect to the size of the formula. In contrast, Theorem 1 implies that the VC density will actually be much lower: at most |ȳ|. This improvement has an impact on some asymptotic bounds in learning-theoretical corollaries discussed by Grohe and Turán, see e.g. [9, Theorem 1]. For lower bounds, we work with labelled graphs. For a finite label set Λ, a Λ-v-labelled graph is a graph whose vertices are labelled using labels from Λ, while in a Λ-ve-labelled graph we label both the vertices and the edges using Λ. For a graph class C, by C Λ,1 we denote the class of all Λ-v-labelled graphs whose underlying unlabeled graphs belong to C, while C Λ,2 is defined analogously for Λ-ve-labelled graphs. The discussed variants of MSO work over labelled graphs in the obvious way. Theorem 2. There exists a finite label set Λ such that the following holds. Let C be a class of graphs and L be a logic such that either (i) C contains graphs of arbitrarily large cliquewidth and L = C 2 MSO 1 ; or (ii) C contains graphs of arbitrarily large treewidth and L = MSO 2 . Then there exists a partitioned L-formula ϕ(x, y) in the vocabulary of graphs from C Λ,t , where t = 1 if (i) holds and t = 2 if (ii) holds, such that the family $S_\varphi(\mathcal{C}_{\Lambda,t})$ contains set systems with arbitrarily high VC dimension. Thus, the combination of Theorem 1 and Theorem 2 provides a tight understanding of the usual connections between MSO 1 and cliquewidth, and between MSO 2 and treewidth, also in the setting of definable set systems.
We remark that the second connection was essentially observed by Grohe and Turán in [9, Corollary 20], whereas the first seems new, but follows from a very similar argument. As argued by Grohe and Turán in [9, Example 21], some mild technical conditions, like closedness under labelings with a finite label set, are necessary for a result like Theorem 2 to hold. Indeed, the class of 1-subdivided complete graphs has unbounded treewidth and cliquewidth, yet CMSO 1 - and CMSO 2 -formulas can only define set systems of bounded VC dimension on this class, due to symmetry arguments. Also, the fact that in the case of unbounded cliquewidth we need to rely on the logic C 2 MSO 1 instead of plain MSO 1 is connected to the longstanding conjecture of Seese [16] about decidability of MSO 1 in classes of graphs. Vapnik-Chervonenkis parameters In this section we briefly recall the main definitions related to the Vapnik-Chervonenkis parameters. We only provide a terse summary of the relevant concepts and results, and refer to the work of Mustafa and Varadarajan [10] for a broader context. A set system is a pair F = (U, S), where U is the universe or ground set, while S is a family of subsets of U. While a set system is formally defined as the pair (U, S), we will often use that term for a family S alone, and then U is implicitly taken to be $\bigcup_{S \in \mathcal{S}} S$. The size of a set system is |F| := |S|. For a set system F = (U, S) and X ⊆ U, the restriction of S to X is the set system F[X] := (X, S ∩ X), where S ∩ X := {S ∩ X : S ∈ S}. We say that X is shattered by F if S ∩ X is the whole powerset of X. Then the VC dimension of F is the supremum of cardinalities of sets shattered by F. As we are mostly concerned with the asymptotic behavior of restrictions of set systems, the following notion will be useful. Definition 3. The growth function of a set system F = (U, S) is the function π F : N → N defined as: $\pi_{\mathcal F}(n) := \max\{\, |\mathcal F[X]| \colon X \subseteq U,\ |X| = n \,\}$. Clearly, for any set system F we have that π F (n) ≤ 2^n, but many interesting set systems admit asymptotically polynomial bounds. This is in particular implied by the boundedness of the VC dimension, via the Sauer-Shelah Lemma stated below. Lemma 4 (Sauer-Shelah Lemma [15,17]). If F is a set system of VC dimension d, then $\pi_{\mathcal F}(n) \le \sum_{i=0}^{d} \binom{n}{i} = O(n^{d})$. Note that when the VC dimension of F is not bounded, then for every n there is a set of size n that is shattered by F, which implies that π F (n) = 2^n. This provides an interesting dichotomy: if π F (n) is not bounded by a polynomial, it must be equal to the function 2^n. As useful as the Sauer-Shelah Lemma is, the upper bound on the asymptotics of the growth function implied by it is quite weak for many natural set systems. Therefore, we will study the following quantity. Definition 5. The VC density of a set system F is the quantity $\inf\{\, \alpha \in \mathbb{R}_{+} \colon \pi_{\mathcal F}(n) \le c \cdot n^{\alpha} \text{ for some } c \in \mathbb{R} \text{ and all } n \in \mathbb{N} \,\}$. Observe that the definition of the VC density of F makes little sense when the universe of F is finite, as then the growth function ultimately becomes 0, allowing a polynomial bound of arbitrarily small degree. Therefore, we extend the definition of VC density to classes of finite set systems (i.e., families of finite set systems) as follows: the VC density of a class C is the infimum over all α ∈ R + for which there is c ∈ R such that π F (n) ≤ c · n^α for all F ∈ C and n ∈ N. Note that this is equivalent to measuring the VC density of the set system obtained by taking the union of all set systems from C on disjoint universes. Similarly, the VC dimension of a class of set systems C is the supremum of the VC dimensions of the members of C.
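To make these definitions concrete, the following is a minimal brute-force sketch, in Python and not part of the paper, that computes the growth function and the VC dimension of a small explicit set system (the family of intervals over a linearly ordered universe); all helper names are ours. It illustrates Definition 3, Definition 5 and the Sauer-Shelah bound on a toy example where the VC dimension is 2 and the growth function is indeed quadratic.

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def traces(sets, X):
    """Distinct traces S ∩ X of the family on X (the family of the restriction F[X])."""
    X = frozenset(X)
    return {frozenset(S) & X for S in sets}

def is_shattered(sets, X):
    return len(traces(sets, X)) == 2 ** len(X)

def vc_dimension(universe, sets):
    return max((len(X) for X in powerset(universe) if is_shattered(sets, X)), default=0)

def growth_function(universe, sets, n):
    """pi_F(n): the largest number of traces on an n-element subset of the universe."""
    return max((len(traces(sets, X)) for X in combinations(universe, n)), default=0)

# Toy set system: all "intervals" {i, i+1, ..., j-1} over a universe of 8 points.
U = range(8)
intervals = [frozenset(range(i, j)) for i in U for j in range(i, 9)]

print("VC dimension:", vc_dimension(U, intervals))   # -> 2 (pairs are shattered, triples are not)
for n in range(1, 6):
    print(n, growth_function(U, intervals, n))       # n*(n+1)/2 + 1, i.e. O(n^2), matching Lemma 4
```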
Thus, informally speaking, the VC density of F is the lowest possible degree of a polynomial bound that fits the conclusion of the Sauer-Shelah lemma for F. Clearly, the Sauer-Shelah lemma implies that the VC density is never larger than the VC dimension, but as it turns out, that connection goes both ways: any set system F whose growth function satisfies π F (n) ≤ c · n^d for all n ∈ N has VC dimension bounded by 4d log(cd). Hence, a set system F has finite VC dimension if and only if it has finite VC density, but the results showing their equivalence usually produce relatively weak bounds. As discussed in the introduction, VC density is often a finer measure of complexity than VC dimension for interesting problems. Set systems definable in logic We assume basic familiarity with relational structures. The domain (or universe) of a relational structure A will be denoted by dom(A). For a tuple of variables x̄ and a subset S ⊆ dom(A), by S^x̄ we denote the set of all evaluations of x̄ in S, that is, functions mapping the variables of x̄ to elements of S. A class of structures is a set of relational structures over the same signature. Consider a logic L over some relational signature Σ. A partitioned formula is an L-formula of the form ϕ(x̄,ȳ), where the free variables are partitioned into object variables x̄ and parameter variables ȳ. Then for a Σ-structure A, we can define the set system of ϕ-definable sets in A: $S_\varphi(A) := \big(\mathrm{dom}(A)^{\bar x},\ \{\{\bar u \in \mathrm{dom}(A)^{\bar x} \colon A \models \varphi(\bar u, \bar v)\} \colon \bar v \in \mathrm{dom}(A)^{\bar y}\}\big)$. If C is a class of Σ-structures, then we define the class of set systems $S_\varphi(\mathcal C) := \{S_\varphi(A) \colon A \in \mathcal C\}$. Note that the universe of S ϕ (A) is dom(A)^x̄, so the elements of S ϕ (A) can be interpreted as tuples of elements of A of length |x̄|. When measuring the VC parameters of set systems S ϕ (A) it will be convenient to somehow still regard dom(A) as the universe. Hence, we introduce the following definition: a k-tuple set system is a pair (U, S), where U is a universe and S is a family of sets of k-tuples of elements of U. Thus, S ϕ (A) can be regarded as an |x̄|-tuple set system with universe dom(A). When F = (U, S) is a k-tuple set system, for a subset of elements X ⊆ U we define $\mathcal S \cap X := \{S \cap X^{k} \colon S \in \mathcal S\}$. This naturally gives us the definition of a restriction: F[X] := (X, S ∩ X). We may now lift all the relevant definitions (of shattering, of the VC dimension, of the growth function, and of the VC density) to k-tuple set systems using only such restrictions to subsets X ⊆ U. Note that these notions for k-tuple set systems are actually different from the corresponding regular notions, which would consider F as a set system with universe U^k. This is because, for instance for the VC dimension, in the regular definition we would consider shattering all possible subsets of k-tuples of the universe, while in the definition for k-tuple set systems we restrict attention to shattering sets of the form X^k, where X ⊆ U. MSO and transductions Recall that Monadic Second Order logic (MSO) is an extension of First Order logic (FO) that additionally allows quantification over subsets of the domain (i.e. unary predicates), represented as monadic variables. Sometimes we will also allow modular predicates of the form |X| ≡ a mod p, where X is a monadic variable and a, p are integers, in which case the corresponding logic shall be named CMSO. If only parity predicates may be used (i.e. p = 2), we will speak about C 2 MSO logic. The main idea behind the proofs presented in the next sections is that we will analyze how complicated set systems one can define in MSO on specific simple structures: trees and grid graphs. Then these results will be lifted to more general classes of graphs by means of logical transductions.
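As a concrete instance of a definable set system, the sketch below (ours, in Python; the graph and the radius are purely illustrative) materializes the ball example mentioned in the introduction: for a formula ϕ(x, y) expressing dist(x, y) ≤ d with |x̄| = |ȳ| = 1, the set system S ϕ (G) consists of all balls of radius d, and on the chosen example graph the sizes of its restrictions grow only linearly.

```python
from collections import deque

def ball(adj, center, d):
    """Vertices at distance <= d from `center`, by breadth-first search."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        u = queue.popleft()
        if dist[u] == d:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return frozenset(dist)

def ball_set_system(adj, d):
    """S_phi(G) for phi(x, y) = 'dist(x, y) <= d': one set per choice of the parameter y."""
    return {ball(adj, v, d) for v in adj}

def restriction_size(sets, A):
    A = frozenset(A)
    return len({S & A for S in sets})

# Illustrative graph: a cycle on 12 vertices; balls of radius 2 are arcs of 5 consecutive vertices.
n = 12
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
balls = ball_set_system(adj, d=2)

for k in (2, 4, 6, 8):
    print(k, restriction_size(balls, range(k)))  # grows at most linearly in |A| on this graph
```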
For a logic L (usually a variant of MSO) and a signature Σ, by L[Σ] we denote the logic comprising all L-formulas over Σ. Then deterministic L-transductions are de ned as follows. De nition 7. Fix two relational signatures The semantics we associate with this de nition is as follows. Let A be a Σ structure and D = {u : u ∈ dom(B), B |= γ(u)}. Then I(A) is a Σ structure given by: In a nutshell, we restrict the universe of the input structure to the elements satisfying γ(x), and in this new domain we reinterpret the relations of Σ using L[Σ]-formulas evaluated in A. We will sometimes work with non-deterministic transductions, which are the following generalization. De nition 8. Fix two relational signatures Σ and Σ . A non-deterministic L-transduction I from Σstructures to Σ -structures is a pair consisting of: a nite signature Γ(I) consisting entirely of unary relation symbols, which is disjoint from Σ ∪ Σ ; and a deterministic L-transduction I from Σ ∪ Γ(I)-structures to Σ -structures. Transduction I is called the deterministic part of I. We associate the following semantics with this de nition. If A is a Σ-structure, then by A Γ(I) we denote the set of all possible Σ ∪ Γ(I)-structures obtained by adding valuations of the unary predicates from Γ(I) to A. Then we de ne I(A) := I (A Γ(I) ), which is again a set of structures. Thus, a non-deterministic transduction I can be seen as a procedure that rst non-deterministically selects the valuation of the unary predicates from Γ(I) in the input structure, and then applies the deterministic part. If C is a class of Σ-structures and I is a transduction (deterministic or not), then by I(C) we denote the sum of images of I over elements of C. Also, if Γ is a signature consisting of unary relation names that is disjoint from Σ, then we write C Γ := {A Γ : A ∈ C} for the class of all possible Σ ∪ Γ-structures that can be obtained from the structures from C by adding valuations of the unary predicates from Γ. An important property of deterministic transductions is that MSO formulas working over the output structure can be "pulled back" to MSO formulas working over the input structure that select exactly the same tuples. All one needs to do is add guards for all variables, ensuring that the only entities we operate on are those accepted by γ(x), and replace all relational symbols of Σ with their respective formulas which de ne the transduction. This translation is formally encapsulated in the following result. The formula ψ provided by Lemma 9 will be denoted by I −1 (ϕ). Finally, we remark that in the literature there is a wide variety of di erent notions of logical transductions and interpretations; we chose one of the simplest, as it will be su cient for our needs. We refer a curious reader to a survey of Courcelle [2]. MSO on graphs We will work with two variants of MSO on graphs: MSO 1 and MSO 2 . Both these variants are de ned as the standard notion of MSO logic, but applied to two di erent encodings of graphs as relational structures. When we talk about MSO 1 -formulas, we mean MSO-formulas over structures representing graphs as follows: elements of the structure correspond to vertices and there is a single binary relation representing adjacency. The second variant, MSO 2 , encompasses MSO-formulas over structures representing graphs as follows: the domain contains both edges and vertices of the graph, and there is a binary incidence relation that selects all pairs (e, u) such that e is an edge and u is one of its endpoints. 
These two encodings of graphs will be called the adjacency encoding and the incidence encoding, respectively. Thus, practically speaking, in MSO 1 we may only quantify over subsets of vertices, while in MSO 2 we allow quanti cation both over subsets of vertices and over subsets of edges. MSO 2 is strictly more powerful than MSO 1 , for instance it can express that a graph is Hamiltonian. We may extend MSO 1 and MSO 2 with modular predicates in the natural way, thus obtaining logic CMSO 1 , C 2 MSO 1 , etc. If G is a graph and ϕ(x,ȳ) is an L-formula over graphs, where L is any of the variants of MSO discussed above, then we may de ne the |x|-tuple set system S ϕ (G) as before, where the universe of S ϕ (G) is the vertex set of G. We remark that in case of MSO 2 , despite the fact that formally an MSO 2 -formula works over a universe consisting of both vertices and edges, in the de nition of S ϕ (G) we consider only the vertex set V as the universe. That is, the parameter variablesȳ range over V and each evaluationv ∈ Vȳ de nes the set of evaluationsū ∈ Vx satisfying G |= ϕ(ū,v) which is included in S ϕ (G). MSO and tree automata When proving upper bounds we will use the classic connection between MSO and tree automata. Throughout this paper, all trees will be nite, rooted, and binary: every node may have a left child and a right child, though one or both of them may be missing. Trees will be represented as relational structures where the domain consists of the nodes and there are two binary relations, respectively encoding being a left child and a right child. In case of labeled trees, the signature is extended with a unary predicate for each label. De nition 10. Let Σ be a nite alphabet. A (deterministic) tree automaton is a tuple (Q, F, δ) where Q is a nite set of states, F is a subset of Q denoting the accepting states, while δ : (Q ∪ {⊥}) 2 × Σ → Q is the transition function. A run of a tree automaton A = (Q, F, δ) over a Σ-labeled tree T is the labeling of its nodes ρ : V (T ) → Q which is computed in a bottom-up manner using the transition function. That is, if a node v bears symbol a ∈ Σ and the states assigned by the run to the children of v are q 1 and q 2 , respectively, then the state assigned to v is δ(q 1 , q 2 , a). In case x has no left or right child, the corresponding state q t is replaced with the special symbol ⊥. In particular, the state in every leaf is determined as δ(⊥, ⊥, a), where a ∈ Σ is the label of the leaf. We say that a tree automaton A accepts a nite tree T if ρ(root(T )) ∈ F . The following statement expresses the classic equivalence of CMSO and nite automata over trees. Lemma 11 ([13] ). For every CMSO sentence ϕ over the signature of Σ-labeled trees there exists a tree automaton A ϕ which is equivalent to ϕ in the following sense: for every Σ-labeled tree T , T |= ϕ if and only if A ϕ accepts T . Since we are actually interested in formulas with free variables and not only sentences, we will need to change this de nition slightly. Informally speaking, we will enlarge the alphabet in a way which allows us to encode valuations of the free variables. Let T be a Σ-labelled tree and consider a tuple of variables x along with its valuationū ∈ V (T )x. Then we can encodeū in T by de ning the augmented tree Tā as follows: Tā is the tree with labels from Σ × {0, 1}x that is obtained from T by enriching the label of every node v with the function f v ∈ {0, 1}x de ned as follows: for x ∈x, we have f v (x) = 1 if and only if v =ū(x). 
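To illustrate Definition 10, here is a small sketch (ours, in Python; the labels, states and transition table are invented for the illustration) of a deterministic bottom-up run over a binary tree: the toy automaton below accepts exactly those {a, b}-labelled trees that contain at least one node labelled b.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

BOT = "_"  # stands for the symbol ⊥ of Definition 10 (a missing child)

def run(delta, node):
    """Bottom-up run: the state of a node is determined by its children's states and its label."""
    ql = run(delta, node.left) if node.left else BOT
    qr = run(delta, node.right) if node.right else BOT
    return delta[(ql, qr, node.label)]

def accepts(delta, accepting, root):
    return run(delta, root) in accepting

# Toy automaton: state "seen" records that a node labelled "b" occurs in the subtree.
states = ["seen", "none"]
delta = {}
for ql in states + [BOT]:
    for qr in states + [BOT]:
        for a in "ab":
            found = a == "b" or ql == "seen" or qr == "seen"
            delta[(ql, qr, a)] = "seen" if found else "none"

t = Node("a", Node("a"), Node("a", None, Node("b")))
print(accepts(delta, {"seen"}, t))  # -> True: the tree contains a node labelled b
```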
As observed by Grohe and Turán [9], CMSO formulas can be translated to equivalent tree automata working over augmented trees. Upper bounds In this section we prove Theorem 1. We start with investigating the case of CMSO-de nable set systems in trees. This case will be later translated to the case of classes with bounded treewidth or cliquewidth by means of CMSO-transductions. Trees Recall that labelled binary trees are represented as structures with domains containing their nodes, two successor relations-one for the left child, and one for the right-and unary predicates for labels. It turns out that CMSO-de nable set systems over labelled trees actually admit optimal upper bounds for VC density. This improves the result of Grohe and Turán [9] showing that such set systems have bounded VC dimension. Theorem 13. Let C be a class of nite binary trees with labels from a nite alphabet Σ, and ϕ(x,ȳ) be a partitioned CMSO-formula over the signature of Σ-labeled binary trees. Then there is a constant c ∈ N such that for every tree T ∈ C and a non-empty subset of its nodes A, we have . By Lemma 12, ϕ(x,ȳ) is equivalent to a tree automaton A = (Q, F, δ) over an alphabet of Σ × {0, 1}x × {0, 1}ȳ. We will now investigate how the choice of parametersȳ can a ect the runs of A over T . Since we are really considering T over the alphabet extended with binary markers forx andȳ, we will use T to denote the extension of the labeling of T where all binary markers are set to 0. That is, T is the tree labeled with alphabet Σ × {0, 1}x × {0, 1}ȳ obtained from T by extending each symbol appearing in T with functions that map all variables ofx andȳ to 0. Tree T q is de ned analogously, where the markers forȳ are set according to the valuationq, while the markers forx are all set to 0. In T we have natural ancestor and descendant relations; we consider every node its own ancestor and descendant as well. Let B be the subset of nodes of T that consists of: • the root of T ; • all nodes of A; and • all nodes u / ∈ A such that both the left child and right child of u have a descendant that belongs to A. Note that |B| 1 + |A| + (|A| − 1) = 2|A|. For convenience, let φ : V (T ) → B be a function that maps every node u of T to the least ancestor of u that belongs to B. We de ne a tree T with B as the set of nodes as follows. A node v ∈ B is the left child of a node u ∈ B in T if the following holds in T : v is a descendant of the left child of u and no internal vertex on the unique path from u to v belongs to B. Note that every node u ∈ B has at most one left child in T , for if it had two left children v, v , then the least common ancestor of v and v would belong to B and would be an internal vertex on both the u-to-v path and the u-to-v path. The right child relation in T is de ned analogously. The reader may think of T as of T with φ −1 (u) contracted to u, for every u ∈ B; see Figure 1. Note that we did not de ne any labeling on the tree T . Indeed, we treat T as an unlabeled tree, but will consider di erent labelings of T induced by various augmentations of T . For this, we de ne alphabet where X → Y denotes the set of functions from X to Y . Now, for a xed valuation of parameter variables q ∈ V (T )ȳ and object variablesp ∈ V (T )x, we de ne the ∆-labeled tree T q as follows. Consider any node u ∈ B and let Tpq[u] be the context of u: a tree obtained from Tpq by restricting it to the descendants of u, and, for every child v of u in T , replacing the subtree rooted at v by a single special node called a hole. 
where it is assumed that on the input we are given the states to which the children of u in T are evaluated. Note that the domain of δ pq [u] consists of pairs of states if u has two children in T , of one state if u has one child in T , and of zero states if u has no children in T . Thus Note that for xedq and u, δ pq [u] is uniquely determined by the subset of variables ofx thatp maps to u. This is becausep ∈ Ax, while u is the only node of φ −1 (u) that may belong to A. Hence, with u we can associate a function f u ∈ ∆ that givent ∈ {0, 1}x, outputs the transformation δ pq [u] for any (equivalently, every)p ∈ Ax satisfyingt(x) = 1 i p(x) = u, for all x ∈x. Then we de ne the ∆-labeled tree T q as T with labeling u → f u . Note that the above construction can be applied toq = in the same way. Now, forp ∈ Ax ∪ { } we de ne the ∆ × {0, 1}x-labeled tree (T q )p by augmenting T q with markers for the valuationp; note that this is possible because A is contained in the node set of T . We also de ne an automaton A working on ∆ × {0, 1}x-labeled trees as follows. A uses the same state set as A, while its transition function is de ned by taking the binary valuation forx in a given node u, applying it to the ∆-label of u to obtain a state transformation, verifying that the arity of this transformation matches the number of children of u, and nally applying that transformation to the input states. Then the following claim follows immediately from the construction. From Claim 1 it follows that if for two tuplesq,q we have T q = T q , then for everyp ∈ Ax, A accepts Tpq if and only if A accepts Tpq . As A is equivalent to the formula ϕ(x,ȳ) in the sense of Lemma 12, this implies that {p ∈ Ax : T |= ϕ(p,q)} = {p ∈ Ax : T |= ϕ(p,q )}. In other words,q andq de ne the same element of S ϕ (T ) [A]. We conclude that the cardinality of S ϕ (T ) [A] is bounded by the number of di erent trees T q that one can obtain by choosing di erentq ∈ V (T )ȳ. Observe that for eachq ∈ V (T )ȳ, tree T q di ers from T by changing the labels of at most |ȳ| nodes. Indeed, from the construction of T q it follows that for each u ∈ B, the labels of u in T q and in T may di er only ifq maps some variable ofȳ to a node belonging to φ −1 (u); this can happen for at most |ȳ| nodes of B. Recalling that |B| 2|A| and |∆| |Q| 2 |x| ·(|Q| 2 +|Q|+1) , the number of di erent trees T q is bounded by where c := 2 |ȳ| · (|ȳ| + 1) · |Q| 2 |x| ·(|Q| 2 +|Q|+1) |ȳ| . As argued, this number is also an upper bound on the cardinality of S ϕ (T ) [A], which concludes the proof. Classes with bounded treewidth or cliquewidth We now exploit the known connections between trees and graphs of bounded treewidth or cliquewidth, expressed in terms of the existence of suitable MSO-transductions, to lift Theorem 13 to more general classes of graphs, thereby proving Theorem 1. In fact, we will not rely on the original combinatorial de nitions of these parameters, but on their logical characterizations proved in subsequent works. The rst parameter of interest is the cliquewidth of a graph, introduced by Courcelle and Olariu [6]. We will use the following well-known logical characterization of cliquewidth. Theorem 14 ( [5,8]). For every k ∈ N there is a nite alphabet Σ k and a deterministic MSO-transduction I k such that for every graph G of cliquewidth at most k there exists a Σ k -labeled binary tree T satisfying the following: I k (T ) is the adjacency encoding of G. 
Thus, one may think of graphs of bounded cliquewidth as of graphs that are MSO-interpretable in labeled trees. By combining Theorem 14 with Theorem 13 we can prove part (i) of Theorem 1 as follows. Fix a class C with uniformly bounded cliquewidth and a partitioned CMSO-formula ϕ(x,ȳ) over the signature of C. Let k be the upper bound on the cliquewidth of graphs from C, and let Σ k and I k be the alphabet and the deterministic MSO-transduction provided by Theorem 14 for k. Then for every G ∈ C, we can nd a Σ k -labeled tree T such that I k (T ) is the adjacency encoding of G. Note that V (G) ⊆ V (T ). Observe that for every and vertex subset where I −1 k (ϕ) is the formula ϕ pulled back through the transduction I k , as given by Lemma 9. As by Theorem 13 we have |S I −1 k (ϕ) (T )[A]| c · |A| |ȳ| for some constant c, the same upper bound can be also concluded for the cardinality of S ϕ (G) [A]. This proves Theorem 1, part (i). To transfer these result to the case of CMSO 2 over graphs of bounded treewidth, we need to de ne an additional graph transformation. For a graph G, the incidence graph of G is the bipartite graph with V (G) ∪ E(G) as the vertex set, where a vertex u is adjacent to an edge e if and only if u is an endpoint of e. The following result links CMSO 2 on a graph with CMSO 1 on its incidence graph. Lemma 15 ([3,4]). Let G be a graph of treewidth k. Then the cliquewidth of the incidence graph of G is at most k + 3. Moreover, with any CMSO 2 -formula ϕ(x) one can associate a CMSO 1 -formula ψ(x) such that for any graph H andā ∈ V (H)x we have H |= ϕ(ā) if and only if H |= ψ(ā), where H is the incidence graph of H. Now Lemma 15 immediately reduces part (ii) of Theorem 1 to part (i). Indeed, for every partitioned CMSO 2 -formula ϕ(x,ȳ), the corresponding CMSO 1 -formula ψ(x,ȳ) provided by Lemma 15 satis es the following: for every graph H and its incidence graph H , we have Observe that by Lemma 15, if a graph class C has uniformly bounded treewidth, then the class C comprising the incidence graphs of graphs from C has uniformly bounded cliquewidth. Hence we can apply part Lower bounds We now turn to proving Theorem 2. As in the work of Grohe and Turán [9], the main idea is to show that the structures responsible for unbounded VC dimension of MSO-de nable set systems are grids. That is, the rst step is to prove a suitable unboundedness result for the class of grids, which was done explicitly by Grohe and Turán in [9,Example 19]. Second, if the considered graph class C has unbounded treewidth (resp., cliquewidth), then we give a deterministic MSO 2 -transduction (resp. C 2 MSO 1 -transduction) from C to the class of grids. Such transductions are present in the literature and follow from known forbidden-structures theorems for treewidth and cliquewidth. Then we can combine these two steps into the proof of Theorem 2 using the following generic statement. In the following, we shall say that logic L has unbounded VC dimension on a class of structures C if there exists a partitioned L-formula ϕ(x,ȳ) over the signature of C such that the class of set systems S ϕ (C) has in nite VC dimension. P . Let formula ψ(x,ȳ) witness that L has unbounded VC dimension on D. Then it is easy to see that the formula ϕ := I −1 (ψ), provided by Lemma 9, witnesses that L has unbounded VC dimension on C. Grohe and Turán proved the following. Grids Theorem 17 (Example 19 in [9]). MSO has unbounded VC dimension on the class of grids. The proof of Theorem 17 roughly goes as follows. 
The key idea is that for a given set of elements X it is easy to verify in MSO the following property: (i, j) ∈ X is true if and only if the ith bit of the binary encoding of j is 1. This can be done on the row-by-row basis, by expressing that elements of X in every row encode, in binary, a number that is one larger than what the elements of X encoded in the previous row. Using this observation, one can easily write a formula ϕ(x, y) that selects exactly pairs of the form ((i, 0), (0, j)) such that (i, j) ∈ X. Then ϕ(x, y) shatters the set {(i, 0) : 1 i log n }, as the binary encodings of numbers from 1 to n give all possible bit vectors of length log n when restricted to the rst log n bits. Consequently, ϕ(x, y) shatters a set of size log n in an n × n grid, which enables us to deduce the following slight strengthening of Theorem 17: MSO has unbounded VC dimension on any class of structures that contains in nitely many di erent grids. For the purpose of using existing results from the literature, it will be convenient to work with grid graphs instead of grids. An n × n grid graph is a graph on vertex set [n] × [n] where two vertices (i, j) and (i , j ) are adjacent if and only if |i − i | + |j − j | = 1. When speaking about grid graphs, we assume the adjacency encoding as relational structures. Thus, the di erence between grid graphs and grids is that the former are only equipped with a symmetric adjacency relation without distinguishement of directions, while in the latter we may use (oriented) successor relations, di erent for both directions. Fortunately, grid graphs can be reduced to grids using a well-known construction, as explained next. Lemma 18. There exists a non-deterministic MSO transduction J from the adjacency encodings of graphs to grids such that for every class of graphs C that contains arbitrarily large grid graphs, the class J(C) contains arbitrarily large grids. P . The transduction uses six additional unary predicates, that is, Γ(J) = {A 0 , A 1 , A 2 , B 0 , B 1 , B 2 }. We explain how the transduction works on grid graphs, which gives rise to a formal de nition of the transduction in a straightforward way. Given an n × n grid graph G, the transduction non-deterministically chooses the valuation of the predicates of Γ(J) as follows: for t ∈ {0, 1, 2}, A t selects all vertices (i, j) such that i ≡ t mod 3 and B t selects all vertices (i, j) such that j ≡ t mod 3. Then the horizontal successor relation H(·, ·) can be interpreted as follows: H(u, v) holds if and only if u and v are adjacent in G, u and v are both selected by B s for some s ∈ {0, 1, 2}, and there is t ∈ {0, 1, 2} such that u is selected by A t while v is selected by A t+1 mod 3 . The vertical successor relation is interpreted analogously. It is easy to see that if G is an n × n grid graph and the valuation of the predicates of Γ(J) is selected as above, then J indeed outputs an n × n grid. This implies that if C contains in nitely many di erent grid graphs, then J(C) contains in nitely many di erent grids. We may now combine Lemma 18 with Theorem 17 to show the following. Lemma 19. Suppose L ∈ {MSO, C 2 MSO, CMSO} and C is a class of structures such that there exists a non-deterministic L-transduction I from C to adjacency encodings of graphs such that I(C) contains in nitely many di erent grid graphs. Then there exists a nite signature Γ consisting only of unary relation names such that L has unbounded VC dimension on C Γ . P . 
As non-deterministic transductions are closed under composition for all the three considered variants of logic (see e.g. [2]), from Lemma 18 we infer that there exists a non-deterministic L-transduction K such that K(C) contains in nitely many di erent grids. By de nition, transduction K has its deterministic part K such that K(C) = K (C Γ(K) ). It now remains to take Γ := Γ(K) and use Lemma 16 together with Theorem 17 (and the remark after it). Classes with unbounded treewidth and cliquewidth For part (ii) of Theorem 2 we will use the following standard proposition, which essentially dates back to the work of Seese [16]. Lemma 20. There exists a non-deterministic MSO-transduction I from incidence encodings of graphs to adjacency encodings of graphs such that for every graph class C whose treewidth is not uniformly bounded, the class I(C) contains all grid graphs. P . Recall that a minor model of a graph H in a graph G is a mapping φ from V (H) to connected subgraphs of G such that subgraphs {φ(u) : u ∈ V (H)} are pairwise disjoint, and for every edge uv ∈ E(H) there is an edge in G with one endpoint in φ(u) and the other in φ(v). Then G contains H as a minor if there is a minor model of H in G. By the Excluded Grid Minor Theorem [14], if a class of graphs C has unbounded treewidth, then every grid graph is a minor of some graph from C. Therefore, it su ces to give a non-deterministic MSO-transduction I from incidence encodings of graphs to adjacency encodings of graphs such that for every graph G, I(G) contains all minors of G. The transduction I works as follows. Suppose G is a given graph and φ is a minor model of some graph H in G. First, in G we non-deterministically guess three subsets: • a subset D of vertices, containing one arbitrary vertex from each subgraph of {φ(u) : u ∈ V (H)}; • a subset F of edges, consisting of the union of spanning trees of subgraphs {φ(u) : u ∈ V (H)} (where each spanning tree is chosen arbitrarily); • a subset L of edges, consisting of one edge connecting a vertex of φ(u) and a vertex of φ(v) for each edge uv ∈ E(H), chosen arbitrarily. Recall that graph G is given by its incidence encoding, hence these subsets can be guessed using three unary predicates in Γ(I). Now with sets D, F, L in place, the adjacency encoding of the minor H can be interpreted as follows: the vertex set of H is D, while two vertices u, u ∈ D are adjacent in H if and only if in G they can be connected by a path that traverses only edges of F and one edge of L. It is straightforward to express this condition in MSO 2 . Observe that part (ii) of Theorem 2 follows immediately by combining Lemma 20 with Lemma 19. Indeed, from this combination we obtain a partitioned MSO-formula ϕ(x,ȳ) and a nite signature Γ consisting of unary relation names such that the class of set systems S ϕ (C Γ ) has in nite VC dimension. Here, we treat C as the class of incidence encodings of graphs from C. Now if we take the label set Λ to be the powerset of Γ, we can naturally modify ϕ(x,ȳ) to an equivalent formula ϕ (x,ȳ) working over Λ-ve-labelled graphs, where the Λ-label of every vertex u encodes the subset of predicates of Γ that select u. Thus S ϕ (C Λ,2 ) has in nite VC dimension, which concludes the proof of part (ii) of Theorem 2. To prove part (i) of Theorem 2 we apply exactly the same reasoning, but with Lemma 20 replaced with the following result of Courcelle and Oum [7]. Lemma 21 (Corollary 7.5 of [7]). 
There exists a C 2 MSO-transduction I from adjacency encodings of graphs to adjacency encodings of graphs such that if C is a class of graphs of unbounded cliquewidth, then I(C) contains arbitrarily large grid graphs.
Lie conformal superalgebras and duality of modules over linearly compact Lie superalgebras We construct a duality functor on the category of continuous representations of linearly compact Lie superalgebras, using representation theory of Lie conformal superalgebras. We compute the dual representations of the generalized Verma modules. Introduction Lie conformal superalgebras encode the singular part of the operator product expansion (OPE) of chiral fields in the vacuum sector of conformal field theory [15]: They play an important role in the theory of vertex algebras that encode the full OPE (1), so that the full structure of a vertex algebra is captured by the λ-bracket of the Lie conformal (super)algebra structure: (2) [a(w) λ b(w)] = j≥0 (a(w) (j) b(w)) λ j j! , and the normally ordered product a(w) (−1) b(w) (since a(w) (−n−1) b(w) = 1 n! (∂ n w a(w) (−1) b(w))). In the classical limit the normally ordered product of a vertex algebra becomes commutative, but the λ-bracket still satisfies the axioms of a Lie conformal superalgebra. This leads to the theory of Poisson vertex algebras that play a fundamental role in the theory of Hamiltonian PDE. Recall that λ-bracket (2) satisfies the following axioms, where a = a(w), ∂a = ∂ w a(w): -module R, endowed with a λ-bracket R ⊗ R → R[λ], satisfying the above three axioms, is called a Lie conformal superalgebra. Here and in the sequel we denote by F an algebraically closed field of characteristic 0. All the work on representation theory of Lie conformal superalgebras R was based on the simple observation that representations of R are closely related to "continuous" representations of the associated to R annihilation Lie superalgebra. Recall that the annihilation Lie superalgebra is the vector superspace where t has even parity, with the (well-defined) continuous bracket (4) [at m , bt n ] = j≥0 m j (a (j) b)t m+n−j , which makes it a linearly compact Lie superalgebra. Since ∂ commutes with ∂ t , it extends in a natural way to a derivation ∂ of the Lie superalgebra A(R), hence one can define the extended annihilation Lie superalgebra A e (R) = F[∂] ⋉ A(R). It is easy to see that a "conformal" R-module M is the same as a continuous A e (R)-module [10]. Note that if a Lie conformal superalgebra R is a finitely generated F[∂]-module, then A(R) is a linearly compact Lie superalgebra, hence representation theory of Lie conformal superalgebras is intimately related to the theory of continuous representations of linearly compact Lie superalgebras. Although it is unclear what is a right definition of a vertex algebra in several indeterminates, the definition of a Lie conformal superalgebra and all the above remarks can be easily extended to the case when one even indeterminate λ is replaced by several even commuting indeterminates λ 1 , . . . , λ r . In the paper we allow also for s odd indeterminates λ r+1 , . . . , λ r+s and we say that the corresponding Lie conformal superalgebra is of type (r, s), but for the sake of simplicity this will not be discussed in the introduction. In the present paper we reverse the point of view: instead of using continuous representations of linearly compact Lie superalgebras in the study of finitely generated F[∂]-modules over Lie conformal superalgebras, we use the latter to study the former. But then a natural question arizes: which linearly compact Lie superalgebras are annihilation algebras of Lie conformal superalgebras? The answer probably is: in all interesting examples. 
Namely, if a linearly compact Lie superalgebra is of geometric origin, i.e., it is constructed with the use of vector fields and differential forms in a formal neighborhood of a point in an (r|s)-dimensional supermanifold, then it is an annihilation superalgebra of a finitely generated as a F[∂ 1 , . . . , ∂ r ]-module Lie conformal superalgebra in r indeterminates. Let us demonstrate this on the example of the Lie algebra W (r) of continuous derivations of the algebra of formal power series F[[t 1 , . . . , t r ]]. Include the Lie algebra W (r) in a larger Lie algebra W (r) of continuous derivations of the algebra of formal Laurent series It is immediate to see, using the standard properties of the formal δ-function δ(z − w) = n∈Z z n w n−1 , that [a i (z), a j (w)] = ∂ w i a j (w)δ(z − w) + a i (w)∂ w j δ(z − w) + a j (w)δ w i δ(z − w), where z = (z 1 , . . . , z r ), w = (w 1 , . . . , w r ), and δ(z − w) = r i=1 δ(z i − w i ). Applying the formal Fourier transform, i.e., letting we obtain a structure of a Lie conformal algebra, which we denote by RW (r), on the free F[∂ z 1 , . . . , ∂ zr ]-module of rank r, generated by the elements a i = a i (z), with the following λ-brackets: [a iλ a j ] = ∂ i a j + λ j a i + λ i a j , i, j = 1, . . . , r. It is easy to see that the linearly compact Lie algebra W (r) is the annihilation algebra of the Lie conformal algebra RW (r). A remarkable feature of representation theory of a Lie conformal superalgebra R is the existence of a contravariant duality functor on the category of R-modules which are finitely generated as F[∂]-modules [15,2,5]. Extension of this construction to the case of a Lie conformal superalgebra R in several indeterminates is straightforward. Due to the above remarks, this duality functor can be transported to the category of continuous representations of the linearly compact Lie superalgebra, which is the annihilation algebra of R. In the present paper we study the duality functor for the category P of continuous Zgraded modules with discrete topology over Z-graded linearly compact Lie superalgebras g = j∈Z ≥−d g j , where the depth d ≥ 1, dim g j < ∞, and [g i , g j ] ⊂ g i+j . We have: g = g <0 + g ≥0 where g <0 = j<0 g j and g ≥0 = j≥0 g j . We also require that modules from P are finitely generated as U (g − )-modules. Recall that a g-module M is continuous if, for any v ∈ M , g n v = 0 for n sufficiently large. The category P is similar to the BGG category O, and, as in category O, the most important objects in P are generalized Verma modules M (F ) (see, e.g. [16]). Recall, that, given a finitedimensional g 0 -module F , one extends it trivially to the subalgebra g >0 = j>0 g j ⊂ g, and defines M (F ) = Ind g g ≥0 F. Our main result is the computation of the dual to M (F ) g-module M (F ) * (see Theorems 3.15 and 3.17). It turns out that M (F ) * is not M (F * ), but M (F ∨ ), where F ∨ is a shifted g 0 -module F * by the following character (=1-dimensional representation) χ of g 0 : (5) χ(a) = str(ad a| g <0 ). In particular, if g 0 = [g 0 , g 0 ], then χ = 0, and M (F ) * = M (F * ), which happens, for example, for the principally graded exceptional Lie superalgebra E(5, 10) [14]. This observation has been made in [6], which led us to the present paper. One of the main problems of representation theory of a linearly compact Lie superalgebra g is the classification of degenerate (i.e. non-irreducible) generalized Verma modules M (F ), associated to finite-dimensional irreducible g 0 -modules F . 
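In formulas, and using only the notation already introduced, the main duality statement summarized above reads: for a finite-dimensional g 0 -module F extended trivially to g >0 ,

\[
M(F) \;=\; \operatorname{Ind}^{\mathfrak g}_{\mathfrak g_{\ge 0}} F,
\qquad
M(F)^{*} \;\cong\; M(F^{\vee}),
\qquad
\chi(a) \;=\; \operatorname{str}\bigl(\operatorname{ad} a\,|_{\mathfrak g_{<0}}\bigr),
\]

where F^∨ denotes the dual module F* shifted by the character χ (equivalently, F* tensored with the one-dimensional g 0 -module defined by χ). The vanishing of χ when g 0 = [g 0 , g 0 ], noted above, is immediate, since any character kills commutators.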
Since the topological dual of M (F ) endowed with discrete topology is a linearly compact g-module, a solution of the above problem is important for the description of irreducible linearly compact g-modules. In order to apply these results to representation theory of simple, finite rank Lie conformal superalgebras of type (r, s), one needs to develop representation theory of the corresponding annihilation algebra, which apart from the "current" case, is a central extension of an infinitedimensional simple linearly compact Lie superalgebra. The simple infinite-dimensional linearly compact Lie superalgebras were classified and explicitly described, along with their maximal open subalgebras, in [14,11,8]. It was shown in [9] and [13] that all linearly compact simple Lie superalgebras of growth 1 (rather their universal central extensions) are annihilation superalgebras of simple Lie conformal superalgebras of type (1, 0). Using them, all finite rank simple Lie conformal superalgebras of type (1, 0) were classified in [13]. A complete list of those, admitting a non-trivial Z-gradation, consists of three series W N , S N , and K N , and two exceptions: K ′ 4 and CK 6 . Representation theory of W N and S N was constructed in [5], of K N with N = 0, 1 in [10], resp. with N = 2, 3, 4 in [12], resp. with all N ≥ 0 in [3]. Representation theory of CK 6 and K ′ 4 was constructed in [4] and [1], respectively. A very interesting feature of these works is that all degenerate modules are members of infinite complexes, for classical (resp. exceptional) Lie conformal superalgebras the number of these complexes being one or two (resp. infinite). A complete representation theory of linearly compact Lie superalgebras, corresponding to simple Lie conformal superalgebras of type (r, s) with r > 1 is known only for Cartan type Lie algebras, beginning with the paper [20], and for the exceptional Lie superalgebra E(3, 6) [16,17,19]. Some partial results in other cases are obtained in [18] and [21]. Note that again for all known examples the degenerate modules can be organized in infinite complexes, the number of them being finite (resp. infinite) in the classical (resp. exceptional) cases. We hope that the duality, established in the present paper, and the Lie conformal superalgebra approach will help to make progress in representation theory in the remaining cases, especially E(5, 10). The contents of the paper are as follows. After the introduction we discuss in Section 2 the notion of Lie conformal superalgebra of type (r, s), its annihilation Lie superalgebra, and elements of their representation theory. In Section 3 we introduce the duality functor in the category of finitely-generated modules over a Lie conformal superalgebra of type (r, s) and of the corresponding annihilation Lie superalgebra. We prove here the main Theorem 3.17 under Assumptions 3.3. We conjecture that if g is a linearly compact Lie superalgebra then for any transitive pair (g, g ≥0 ), i.e., such that g ≥0 is an open subalgebra of g containing no non-trivial ideal of g, one can construct a duality functor for which Theorem 3.17 still holds with χ(a) = str(ad a| g/g ≥0 ) for a ∈ g ≥0 . In the remaining Sections 4-8 we show that the linearly compact Lie superalgebras g = W (r, s), K(1, n), E(5, 10), E (3,6), and E (3,8) are annihilation Lie superalgebras of certain Lie conformal superalgebras Rg of type (r, s) (for suitable r and s) which we describe explicitely. 
We check that in all these cases Assumptions 3.3 on g with its principal gradation are satisfied. We also check in Section 4 that for all annihilation superalgebras, associated to the ordinary Lie conformal superalgebras (i.e., of type (1, 0)) Theorem 3.17 is applicable as well. Unfortunately we do not know whether this is the case for the remaining exceptional simple Lie superalgebra g = E(4, 4), though it is not difficult to construct the corresponding Rg. If R is a Z/2Z-graded vector space we give to [λ] ⊗ R the structure of a Z/2Z-graded [λ]-bimodule by letting λ i (P (λ) ⊗ a) = λ i P (λ) ⊗ a and (P (λ) ⊗ a)λ i = (−1) p i p(a) P (λ)λ i ⊗ a, where p(a) ∈ Z/2Z denotes the parity of a. We will usually drop the tensor product symbol and simply write P (λ)a instead of P (λ) ⊗ a. Definition 2.1. A Lie conformal superalgebra of type (r, s) is a Z/2Z graded [∂]-bimodule R such that a∂ i = (−1) p i p(a) ∂ i a for all a ∈ R and i ∈ {1, . . . , r + s}, endowed with a λ-bracket, i.e. a Z/2Z-graded linear map R ⊗ R → [λ] ⊗ R, denoted by a ⊗ b → [a λ b], that satisfies the following properties: We refer to Properties (6) and (7) as the conformal sesquilinearity, Property (8) as the conformal skew-symmetry and Property (9) as the conformal Jacobi identity. We note that the notion of a Lie conformal superalgebra, as treated in [15], corresponds to a Lie conformal superalgebra of type (1, 0). For the convenience of the reader we first briefly present the theory of Lie conformal superalgebras of type (r, 0): in this case all the results are straightforward extensions of those in type (1, 0) and therefore are stated without proofs. We then develop the general theory in type (r, s). If K = (k 1 , . . . , k r ) is any r-tuple of non-negative integers we let λ K = λ k 1 1 λ k 2 2 · · · λ kr r and K! = k 1 ! · · · k r !. For a, b ∈ R the K-products (a K b) are defined by the polynomial expansion (10) [ Starting from a Lie conformal superalgebra R of type (r, 0), one can construct a new Lie conformal superalgebraR of the same type, called the affinization of R. Note that in this expression it is meant that the derivatives with respect to the variables y i in the bracket [a λ+∂y b] act only on y M . The corresponding K-products are: The λ-bracket (11) defines onR a Lie conformal superalgebra structure with∂ = ∂ + ∂ y . The annihilation algebra associated with R is the Lie superalgebra with the bracket given by The representation theory of a Lie conformal superalgebra is closely related to the representation theory of the corresponding annihilation algebra. This fact relies on the following relation where The goal of this section is to extend these results to Lie conformal superalgebras of type (r, s). In this context, in order to simplify computations involving signs, it is more convenient to use expansion (14) below instead of (10). For this, introduce the following notation. If K = (k 1 , . . . , k t ) is any sequence with entries in {1, . . . , r + s} we let and we similarly define y K , x K and so on. If K = ∅, we let f (K) = 1, λ K = 1. We also let p K = p k 1 + · · · + p kt and so p(λ K ) = p K . For example, if r = 2, s = 3 and K = (2, 3, 2, 1, 5, 4), then f (K) = 2, λ K = −λ 1 λ 2 2 λ 3 λ 4 λ 5 and p K = 1. It is clear that if J and K are obtained from each other by a permutation of the entries we have λ J = ±λ K ; we write in this case J ∼ K and we denote by S r,s any set of representatives of these equivalent classes. 
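For orientation, in the type (r, 0) case discussed above the polynomial expansion defining the K-products is presumably the direct multi-variable analogue of the familiar one-variable expansion [a λ b] = Σ_k (λ^k/k!) (a (k) b); a hedged reconstruction in the multi-index notation just introduced reads

\[
[a_{\lambda} b] \;=\; \sum_{K \in \mathbb{Z}_{\ge 0}^{\,r}} \frac{\lambda^{K}}{K!}\,\bigl(a_{(K)} b\bigr),
\qquad
\lambda^{K} = \lambda_1^{k_1}\cdots\lambda_r^{k_r},
\qquad
K! = k_1!\cdots k_r!,
\]

with only finitely many of the K-products (a (K) b) nonzero for any given pair a, b, since [a λ b] is polynomial in λ.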
For a, b ∈ R the K-products (a K b) are uniquely determined by the following conditions: Remark 2.2. Conditions (6) and (7) in Definition 2.1 can be restated in terms of K-products by means of the following equations, where, for K = (k 1 , . . . , k t ) and i ∈ {1, . . . , r + s}, we let iK = (i, k 1 , . . . , k t ): As in the completely even case, starting from a Lie conformal superalgebra R of type (r, s) one can construct a new Lie conformal superalgebraR of the same type, called the affinization of R. LetR = R ⊗ [[y]]. We considerR as a [[y]]-bimodule and also as a [∂ y ] = [∂ y 1 , . . . , ∂ y r+s ]-bimodule letting with λ-bracket given by The corresponding K-products are: where for K = (k 1 , . . . , k t ) and J = (j 1 , . . . , j u ) we let KJ = (k 1 , . . . , k t , j 1 , . . . , j u ). Proof. We first check condition (6) in Definition 2.1. Similarly one can check condition (7). Now we verify the conformal skew-symmetry, i.e. We have where the last equality holds due to the Leibniz rule. The verification of the conformal Jacobi identity is left to the reader. Proof. The proof is a straightforward generalization of the standard conformal case which is treated in [15]. One can also check that this is an immediate consequence of Properties (8) and (9) onR together with the observation that ∂ i + ∂ y i = 0 on A(R). Next target is to extend the fundamental identity (13) to Lie conformal superalgebras. The crucial point here is to give the appropriate definition of a λ ∈ [[λ]] ⊗ A(R): this is the main result in the next Proposition 2.8. We first give some technical lemmas. Lemma 2.6. Let R, J be two finite sequences with entries in {1, . . . , r + s}. Then where η J = (−1) 1+2+···+q J and q J is the number of odd entries in J. Proof. It easily follows by induction on the length of J. As in the completely even case the following result will turn out to be crucial in the representation theory of Lie conformal superalgebras (cf. [10]). Then Proof. We have by Lemma 2.7. Conformal modules and conformal duals This section is dedicated to the study of modules over Lie conformal superalgebras. Notice that if R is a Z-graded Lie conformal superalgebra of type (r, s) then its annihilation algebra A(R) inherits a Z-gradation by setting In what follows we assume the following technical conditions on a Lie conformal superalgebra R of type (r, s), which turn out to be satisfied in many interesting cases: we state them explicitly for future reference. (1) R is Z-graded; (2) The induced Z-gradation on A(R) has depth at most 3; (3) The homogeneous components A(R) −1 and A(R) −3 are purely odd; (4) The map ad : A(R) −2 → der(A(R)) is injective and its image is D = ∂ y 1 , . . . , ∂ y r+s . We consider the semi-direct sum of Lie superalgebras D ⋉ A(R), we observe that the subset We point out that the Lie superalgebra D ⋉ A(R) is the natural generalization of the socalled extended annihilation algebra introduced by Cheng and Kac in [10]. A key observation made in [10] is that conformal R-modules are exactly the same as continuous (called conformal in [5]) modules over the extended annihilation algebra. The following proposition extends this result in our context and is proved using Proposition 2.8. a module M such that for every v ∈ M and every a ∈ R, (y K a).v = 0 only for a finite number of K. The equivalence between the two structures is provided by the following relations: Definition 3.5. 
We say that a conformal R-module M is coherent if the action of the ideal Thanks to Proposition 3.4, a coherent R-module is precisely a continuous module over g(R). One checks directly that a conformal R-module M is coherent if and only if it satisfies the following property: for every a ∈ R and all K such that deg( Definition 3.6. The conformal dual M * of a conformal R-module M is defined as , and with the following λ-action of R: Here by p(f ) we denote the parity of the map f λ . Proof. We need to check that properties (M1) and (M2) hold for M * . We have: Besides, (M1) thus follows. As for (M2), we have: Let M K be the subspace of (λ) ⊗ M spanned by all elements of the form λHv for all H = K, ℓ(H) ≥ ℓ(K) and all v ∈ M . We have Therefore and the proof is complete by the observation following Definition 3.5. Proposition 3.8. Let T : M → N be a morphism of conformal R-modules i.e. a linear map such that: then the map T * : N * → M * given by: and Let us verify property (1) for T * . We have and Besides we have and Property (2) for T * follows from property (2) for T . Proposition 3.9. Let M be a conformal R-module which is free and finitely-generated as a Proof. The fact that M * is a free [∂]-module with basis {m * i } is an easy verification. By definition we have: The statement follows. Remark 3.10. We point out that, by Proposition 3.9, the map provided that M is free and finitely generated as [∂]-module. Remark 3.11. Let M, N be free and finitely generated conformal R-modules and T : M → N be a conformal morphism. In [5,Proposition 2.4] it is shown that if R is of type (1, 0), T is injective and N/ImT is free as F[∂]-module, then T * : N * → M * is surjective and that the injectivity of T is not sufficient. The same argument applies also for R of type (r, s). On the other hand, it is easy to check that if T is a surjective morphism of conformal modules then T * is always injective. Recall that we always assume that Assumptions 3.3 on R are satisfied, in particular, g(R) = ⊕ j≥−3 g(R) j is a Z-graded Lie superalgebra of depth at most 3 with g(R) −1 and g(R) −3 purely odd, and g(R) −2 identified with D = ∂ y 1 , . . . , ∂ y r+s . Let F be a finite-dimensional g(R) 0 -module which we extend to g(R) ≥0 = j≥0 g(R) j by letting g(R) j , j > 0, act trivially. We let be the generalized Verma module, attached to F . Since, by our assumptions, For the second summand we can also apply the induction hypothesis so that Definition 3.13. If g is a Lie superalgebra, ϕ : g → gl(V ) is a representation of g and x → χ x ∈ F is a character of g, we let ϕ χ : g → gl(V ) be given by It is clear that ϕ χ is still a representation and we call it the χ-shift of ϕ. In particular, if V is any g-module we call the χ-shifted dual of V the χ-shift of the dual representation V * . More explicitly, if {v 1 , . . . , v n } is a basis of V and {v * 1 , . . . , v * n } is the corresponding dual basis of V * the χ-shifted action on V * is given by (19) x for all x ∈ g and h = 1, . . . , n. Now let U = U (g(R) <0 ) and observe that U is a graded g(R) 0 -module by adjoint action. Let d = deg d Ω and consider the homogeneous component of degree d of U In other words the character ρ is uniquely determined by the condition (20) [ Lemma 3.14. Let x ∈ g(R) 0 . Then where str denotes the supertrace of an endomorphism of a vector superspace. We let {∂ * y 1 , . . . , ∂ * y r+s } be the basis of g(R) * −2 dual to {∂ y 1 , . . . , ∂ y r+s }. We let Proof. Recall that for all a ∈ R we have for some polynomials P I,k;J,h (λ, ∂). 
Let y S a ∈ g(R) 0 and observe that deg a ≤ 0 is even. We make use of (22) to compute the action of y S a on F Ω . We point out that P I,k;Ω,h (λ, ∂) = 0 only if I = Ω. Indeed, if y M a ∈ g(R) <0 then y M a ∈ g(R) −2 and so we have ≥0 this follows by applying Lemma 3.12. In order to compute the polynomials P Ω,k;Ω,h (λ, ∂) we notice that by (22) and Lemma 3.12 we have Substituting (y N a).v k = h v * h ((y N a).v k )v h in the previous formula we deduce that By Proposition 3.9 it follows that Observe that in each summand of the last sum we have a factor v * h (y N a v k ) so that we can assume p(v h ) + p(v k ) = p(y N a) in order to simplify the sign Now recall that the action of y S a ∈ g(R) 0 on (d Ω v h ) * can be obtained from (23) thanks to Proposition 3.4. This immediately implies that F Ω is a g(R) 0 -submodule of M (F ) * . More precisely, for y S a ∈ g(R) 0 we have: Recalling that iM ∼ S implies y i y M = ε i,M ;S y S we have ∂ y i (y S ) = ε i,M ;S m i (S)y M and so we have by Lemma 3.14. The result follows by (19). It follows that For the 1-dimensional g(R) 0 -module χ given by (21), we denote the χ-shifted dual of F by F ∨ . The following is our main result. Theorem 3.17. Let R be a Lie conformal superalgebra of type (r, s) satisfying Assumptions Proof. We use the same notation as above and consider the basis {v * 1 , . . . , v * ℓ } of F ∨ dual to the basis {v 1 , , . . . , v ℓ }. By Theorem 3.15 and Proposition 3.16, we can define a morphism ϕ : M (F ∨ ) → M (F ) * of g(R)-modules by extending the g(R) ≥0 -modules isomorphism ϕ : F ∨ → F Ω given by ϕ(v * k ) = (d Ω v k ) * for all k = 1, . . . , ℓ, in the following natural way: for all u ∈ U (g(R)) and v ∈ F ∨ . Since M (F ∨ ) and M (F ) * are free [∂]-modules of the same rank, we will prove that ϕ is in fact an isomorphism by showing that it is surjective. To this aim it is sufficient to show that the g(R)-submodule S of M (F ) * generated by F Ω is the whole M (F ) * . Let a ∈ R be an element of negative odd degree, say deg a = −2g + 1. Recall that by Lemma 3.12 if deg(y M a) > 0 we have In particular, if J = {j 1 , . . . , j r } with j 1 < j 2 < · · · < j r and |J| ≥ |I|, then It follows that We show by reverse induction on |I| that S contains all elements (d I v k ) * . The first step of the induction consists of the elements in F Ω which belong to S by definition. So let I Ω and let J be such that I J and |J| = |I| + 1. If J = {j 1 , . . . , j r } there exists i such that I = J \ {j i }. By induction hypothesis all elements (d J v h ) * ∈ S. Now we observe that there exist a ∈ R and N such that d j i = y N a. Indeed, by definition of g(R), d j i can be expressed as follows: Let N be such that y N is a non-zero common multiple of all y Mr . Then we can write where β r are suitable constants and I r are suitable sequences of indices. By Equation (25), observing that α j i ,N (a) = 1, we have This completes the proof. 4. The Lie conformal superalgebra of type W . We denote by W (r, s) the Lie superalgebra of derivations of [[x]], where, as usual, x = (x 1 , . . . , x r+s ) and x 1 , . . . , x r are even and x r+1 , . . . , x r+s are odd variables. In this section we realize W (r, s) as the annihilation superalgebra associated to a Lie conformal superalgebra of type (r, s) which satisfies Assumptions 3.3. Definition 4.1. We denote by RW (r, s) the free [∂]-module with even generators a 1 , . . . , a r and odd generators a r+1 , . . . 
, a r+s , all of degree −2, and λ-bracket given by [a iλ a j ] = (∂ i + λ i )a j + a i λ j and extended on the whole RW (r, s) by properties (6) and (7) Proof. It is sufficient to verify that conformal skew-symmetry and Jacobi identity hold for the generators a i . We first verify the conformal skew-symmetry. We have The Jacobi identity can be verified similarly. Namely we have: The fact that RW (r, s) satisfies Assumptions 3.3 is straightforward. In this notation we have the following two sequences of morphisms of generalized Verma modules: All Lie superalgebras (26) are Z-graded by their principal gradation described in [11], and these gradations are induced from Z-gradations of the corresponding Lie conformal superalgebras. For example the principal Z-gradation of W (1, n) and S(1, n) is defined as in Proposition 4.3, and it is induced from the Z-gradation of RW (r, s) from Proposition 4.2. The Lie superalgebras K ′ (1, 4) and E(1, 6) with their principal gradation are Z-graded subalgebras of K(1, 4) and K(1, 6) respectively, with the same non-positive part. Hence the shift character in these cases is given by (30) with n = 4 and 6, respectively. Note also that on the central element c of the universal central extension of K ′ (1, 4) the shift character vanishes. This description of the shift character shows that the complexes of degenerate Verma modules described in [3], [1] and [4] for K(1, n), K ′ (1, 4), and E(1, 6), respectively, are mapped to each other under duality described by Theorem 3.17. Of course, in order to be able to apply Theorem 3.17, we need to check that the Lie superalgebras in question are annihilation superalgebras of Lie conformal superalgebras, and that Assumptions 3.3 hold. But this holds for K(1, n) and K ′ (1, 4) by [13] and for E(1, 6) by [9]. In this section we introduce, following [11], the exceptional linearly compact infinite-dimensional Lie superalgebra E(5, 10) and realize it as the annihilation superalgebra of a Lie conformal superalgebra of type (5, 0). Remark 6.2. One can verify that the λ-bracket defined on RE(5, 10) satisfies the conformal skew-symmetry in Definition 2.1 but it does not satisfy the conformal Jacobi identity. Note that the λ-bracket [∂ x i λ ∂ x j ] differs in sign from that given in Definition 4.1, but is more natural in this context. In order to explain how Definition 6.1 arises we make a short detour on formal distribution algebras. We adopt the same notation and terminology as in [15]. All variables in this context are even. For this reason we indicate monomials with the usual multi-exponent notation, namely, for N = (n 1 , . . . , n 5 ) and x = (x 1 , . . . , x 5 ), x N = x n 1 1 · · · x n 5 5 . For two variables w, z define the formal δ-function Recall that it satisfies the following properties: for any formal distribution f (z). We let w = (w 1 , . . . , w 5 ), z = (z 1 , . . . , z 5 ) and λ = (λ 1 , . . . , λ 5 ), and introduce the formal δ-function in 5 variables (the discussion below holds for any finite number of variables) The properties of δ(z − w) imply similar properties of it: for any formal distribution f in 5 variables, in particular: Proof. Using Equations (33) and (32) we have Now we consider the free F[[x 1 , . . . , x 5 ]][x −1 1 , . . . , x −1 5 ]-module A 5 with basis ∂ x i , i = 1, . . . , 5, and d ij , 1 ≤ i < j ≤ 5. Define the skew-supersymmetric bracket on A 5 by the same formulas as for E (5,10). 
Then A 5 is a superalgebra containing E(5, 10) as a subalgebra, but the bracket on A 5 does not satisfy the Jacobi identity. Recall that two formal distributions a(z), b(z) ∈ A 5 [[z ±1 ]], i.e. bilateral series with coefficients in A 5 in the variables z 1 , . . . , z 5 , are called local if and only a finite number of c N (w) are nonzero. We say in this case that [a(z) (N ) b(z)] = c N (z) is the N -product of a(z) and b(z) and we define their λ-bracket by It follows from the lemmas below that these formal distributions are pairwise local. In particular Proof. We have Lemma 6.6. We have and so Proof. We have, by Lemma 6.3, N ∈ Z 5 , j, k = 1, . . . , 5, j < k} is a linear basis of A 5 , we have that P i (∂ z )δ(x − z) = 0 and Q jk (∂ z )δ(x − z) = 0 and the result follows. By Lemma 6.7 we can identify the F[∂]-module RE(5, 10) with the F[∂ z ]-module Z. Moreover the λ-brackets on RE(5, 10) correspond to brackets of the corresponding formal distributions in Z via the formal Fourier transform thanks to Lemmas 6.4, 6.5 and 6.6. We restrict our attention to the following subspace of RE(5, 10): Definition 6.8. Let RE(5, 10) be the F[∂]-submodule of RE(5, 10) generated by the following elements: One can also give the following abstract presentation of RE(5, 10). Proposition 6.9. RE(5, 10) is generated as a F[∂]-module by elements a ij , b k , where i, j, k ∈ {1, 2, 3, 4, 5}, subject to the following relations: (1) a ij + a ji = 0; It is a simple verification that the generators a ij and b k satisfy the stated relations. By construction the elements d jk (j < k) and ∂ x i are free generators of RE [5,10]. Assume we have a relation Using relation (1), we can assume Q ij = −Q ji and in particular Q ii = 0. Then we have In particular we have for all h = k that P h (∂)∂ k = P k (∂)∂ h and hence there exists a polynomial P (∂) such that P k (∂) = P (∂)∂ k for all k, hence the relation involving the b k 's in (34) is a consequence of relation (3). Using relation (2) systematically, we can assume that if i < j the polynomial Q ij is actually a polynomial in the variables ∂ h with h ≤ j. With this assumption we show that all polynomials Q ij vanish by induction on the lexicographic order of the pair of indices (i, j). and since Q ih is not a polynomial in ∂ j for all j > h we deduce that Q ih = 0. In the next result we compute the λ-brackets among generators of RE(5, 10). Proposition 6.10. We have • [a ij λ a rs ] = λ j λ r a si + λ i λ r a js + λ i λ s a rj + λ j λ s a ir + λ i ∂ z j a rs + λ j ∂ z i a sr ; Proof. Using properties (6), (7), (8) of Definition 2.1 and Definition 6.1, we have: Using four times this relation we obtain In order to compute [a ij λ b k ] we first assume that i, j, k are distinct. We have Now we assume that i, j, k are not distinct and with no lack of generality we can assume that k = i = j. We have In the last sum all terms with s = i cancel out and so we have Let h, k, l be such that ε ijhkl = 1. We have: Theorem 6.11. The F[∂]-module RE(5, 10) with the λ-bracket induced from RE(5, 10) and given in Proposition 6.10 is a Lie conformal superalgebra of type (5, 0). Proof. By Proposition 6.10 the λ-bracket actually restricts to a linear map (5,10) satisfying conformal sesquilinearity and conformal skew-symmetry. 
We need to prove the conformal Jacoby identity for the triple (a, b, c) in the following four cases (1) (a, b, c) = (a ij , a rs , a mn ); (1) Since the elements a ij belong to the submodule of RE(5, 10) generated by the elements ∂ x i 's it is enough to consider the Jacobi identity for elements ∂ x i , ∂ x j , ∂ x k and this can be verified as in Proposition 4.2. (2) We have to show that For distinct i, j, k let r, s be such that ε ijkrs = 1. We have: Similarly we can compute So we have (3) First assume i, j, r, s, k are distinct. We have and finally and so the conformal Jacobi identity follows also in this case. The other cases can be carried out similarly. Proof. Consider the elements a ij y R with |R| = k which generate g(RE(5, 10)) 2k−4 . Since a ij = −a ji we can always assume that i < j. We show that the set To show this we choose a total order on the monomials a ij y R with i < j such that • if h ≤ i < j ≤ k then a ij y R a hk y S for all R, S. • if i < j < k then a ij y R ≺ a jk y S for all R, S. We prove that for every element a ij y R that is not in B there exists a relation which expresses it as a linear combination of smaller elements. Let a ij y R / ∈ B. If j − i > 2 we let i < h < j and we have Let us consider the case j = i + 1. If y i , y i+1 do not divide y R we have where h is any index distinct from i, i + 1. So we can assume that i > 1 and that y i+1 is a divisor of y R but y i is not (otherwise a i i+1 y R ∈ B). We have So the dimension of g(RE(5, 10)) 2k−4 is less than or equal to the cardinality of B, i.e. Theorem 6.14. The annihilation algebra g(RE(5, 10)) is isomorphic, as a Z-graded Lie superalgebra, to E(5, 10) with principal gradation under the following map Φ: Proof. Let R = RE(5, 10). By construction and Proposition 6.9, g(R) is spanned by the elements a ij y M and b k y M subject to the following relations: In particular a ij y 0 = 0 and b k y 0 = 0. It is easy to check that relations (i), (ii), (iii) are preserved by the map Φ and that Φ is surjective. Injectivity of Φ follows from Corollary 6.13. The proof that Φ is an homomorphism is a straightforward verification based on the following observation [a ij y M , a rs y N ] = a si (∂ y j ∂ yr y M )y N + a js (∂ y i ∂ yr y M )y N + a rj (∂ y i ∂ ys y M )y N + a ir (∂ y j ∂ ys y M )y N − a rs ∂ y j ((∂ y i y M )y N ) − a sr ∂ y i ((∂ y j y M )y N ). The following corollary answers a question raised in [6]. Corollary 6.15. Let F be a finite-dimensional sl 5 -module and let M (F ) be the corresponding E(5, 10)-Verma module. Then M (F ) * is isomorphic to M (F * ). In this section we introduce the infinite-dimensional (linearly compact) Lie superalgebra E(3, 6) and realize it as the annihilation superalgebra of a Lie conformal superalgebra of type (3,0). This is used to show that M (F ) * = M (F * ) for every finite-dimensional representation F of E(3, 6) 0 . We consider on E(3, 6) the principal gradation given by deg The strategy to construct the Lie conformal superalgebra Rg for g = E(3, 6) and E(3, 8) (see Section 8), such that the annihilation Lie superalgebra of Rg is g, is the same as for g = E(5, 10) in Section 6 (see also the Introduction). Namely, we construct a formal distribution Lie superalgebrag by localizing the formal power series in x 1 , x 2 , x 3 by these variables, and show thatg is spanned by the coefficients of pairwise local formal distributions a i (z). 
Next, we compute the brackets [a i (z), a j (w)] as linear combinations of the delta function and its derivatives with coefficients the formal distributions a k (w) and their derivatives. Applying the formal Fourier transform, we obtain the Lie conformal superalgebra Rg of type (3, 0). The annihilation Lie superalgebra of Rg has a canonical surjective map Φ to g, and it remains to show that the kernel of Φ is zero. In the case when all the coefficients of all the formal distributions a i (z) are linearly independent, like for g = E(3, 6), this is immediate. But for E(5, 10) and E (3,8) this does not hold, and it requires some effort to prove that Φ has zero kernel. (The case of Lie conformal superalgebras of type (1, 0) is easy since any finitely generated module over F[∂] is a direct sum of a free module and a torsion module.) So let g = E(3, 6) andg be defined as above. In order to construct the conformal superalgebra RE(3, 6) we consider the following formal distributions in the variable z with coefficients iñ g: a i (z) = δ(x − z)∂ x i , c(z) = δ(x − z) ⊗ c, and b hk (z) = δ(x − z)dx h ⊗ e k , for all i, h = 1, 2, 3, k = 1, 2, c ∈ sl 2 . The computation of the formal Fourier transform of the brackets between these formal distributions leads us to the following definition.
Identification and characterization of immature Anopheles and culicines (Diptera: Culicidae) at three sites of varying malaria transmission intensities in Uganda Background Over the last two decades, there has been remarkable progress in malaria control in sub-Saharan Africa, due mainly to the massive deployment of long-lasting insecticidal nets and indoor residual spraying. Despite these gains, it is clear that in many situations, additional interventions are needed to further reduce malaria transmission. The World Health Organization (WHO) has promoted the Integrated Vector Management (IVM) approach through its Global Vector Control Response 2017–2030. However, prior roll-out of larval source management (LSM) as part of IVM, knowledge on ecology of larval aquatic habitats is required. Methods Aquatic habitats colonized by immature Anopheles and culicines vectors were characterized at three sites of low, medium and high malaria transmission in Uganda from October 2011 to June 2015. Larval surveys were conducted along transects in each site and aquatic habitats described according to type and size. Immature Anopheles, culicines and pupae from the described habitats were sampled using standard dipping methods to determine larval and pupae densities. Larvae were identified as anopheline or culicine, and counted. Pupae were not identified further. Binary logistic regression analysis was used to identify factors associated with the presence of immature Anopheles and culicines in each site. Results A total of 1205 larval aquatic habitats were surveyed and yielded a total of 17,028 anopheline larvae, 26,958 culicine larvae and 1189 pupae. Peaks in larval abundance occurred in all sites in March–May and August-October coinciding with the rainy seasons. Anopheles larvae were found in 52.4% (n = 251) of aquatic habitats in Tororo, a site of high transmission, 41.9% (n = 536) of habitats in Kanungu, a site with moderate malaria transmission, and 15.8% (n = 418) in Jinja, a site with low malaria transmission. The odds of finding larvae was highest in rice fields compared to pools in both Tororo (odds ratio, OR = 4.21, 95% CI 1.22–14.56, p = 0.02) and Kanungu (OR = 2.14, 95% CI 1.12–4.07, p = 0.02), while in Jinja the odd were highest in containers (OR = 4.55, 95% CI = 1.09–19.14, p = 0.03). In Kanungu, larvae were less likely to be found in containers compared to pools (OR = 0.26, 95% CI 0.09–0.66, p = 0.008) and river fringe (OR = 0.19, 95% CI 0.07–0.52, p = 0.001). Medium sized habitats were associated with high odds of finding larvae compared to small habitats (OR = 3.59, 95% CI 1.18–14.19, p = 0.039). Conclusions These findings show that immature Anopheles and culicines were common in areas of high and moderate transmission but were rare in areas of low transmission. Although immature Anopheles and culicines were found in all types of water bodies, they were most common in rice fields and less common in open drains and in river fringes. Methods are needed to reduce the aquatic stages of anopheline mosquitoes in human-made habitats, particularly rice fields. Background Between 2000 and 2015 there has been remarkable progress in malaria control in sub-Saharan Africa mainly due to the massive deployment of insecticide-treated nets (ITNs), indoor residual spraying (IRS) and prompt treatment with artemisinin-based combinations [1]. It is clear though that in many places this combination of interventions is not sufficient, especially when addressing outdoor transmission. 
The World Health Organization (WHO) has called for new approaches using the most effective tools in a more targeted way to prevent disease and save lives in countries hardest hit [2]. In Uganda, pyrethroid resistance is widespread and likely to undermine the impact of ITNs that use unmodified pyrethroids [3][4][5][6][7]. Anopheles mosquitoes have also developed resistance to some of the insecticides commonly used IRS [7]. Malaria control may be further hindered if large-scale deployment of ITNs and IRS changes vector behavior from biting indoors to outdoors, biting times and species composition [8,9]. As a result, there is a need for alternative control interventions to reduce the force of malaria infection. In the last decade, WHO has promoted the Integrated Vector Management (IVM) approach through its Global Vector Control Response 2017-2030 [2,10]. In this multi-sectoral approach, multiple control tools are combined to improve their efficacy, cost-effectiveness and sustainability. LSM targets immature mosquito populations by removing standing water, flushing aquatic habitats, or adding insecticides, microbial larvicides or natural predators to standing water to kill larvae [11][12][13]. Adult Anopheles control, complemented by larval control can significantly reduce malaria transmission in sub-Saharan Africa [13][14][15][16] and has been recommended to reduce outdoor transmission [17]. LSM has been incorporated in Integrated Vector Management (IVM) as a malaria control policy in Uganda [18] and scaling LSM in Uganda is highly recommended [19]. Effective implementation of LSM requires knowledge about Anopheles habitats. Malaria vector control programmes in Uganda have mainly targeted adult stages of the vector. Because of this less attention has been given to studying and characterizing habitats of immature Anopheles stages. Although malaria control in Uganda has relied heavily on adult vector control, few studies have characterized Anopheles aquatic habitats in the country over the past 20 years. This study was designed to describe key Anopheles and culicine larval habitats and factors associated with larval abundance at three sites of varying malaria transmission in Uganda. The results of this study should be useful in planning and implementing larval management strategies. Study sites The study was carried out in two rural sub-counties (Nagongera and Kihihi) and one peri-urban sub-county (Walukuba) (Fig. 1). Nagongera sub-county is located in Tororo district (00° 46′ 10.6″, N 34° 01′ 34.1″ E). At the time the study was initiated, Tororo was an area of intense malaria transmission [20], although transmission has been greatly reduced following the implementation of IRS starting in December 2014 [8,21,22]. Tororo is situated at an elevation of 1185 m above sea level, and houses are constructed on low-lying hills. It is an area with savannah grassland interrupted by bare rocky outcrops and low-lying wetlands. Unproductive sandy soils are the most common, which tempts farmers to cultivate in and around wetlands specifically rice growing [23]. Other crops grown include; maize, cassava, sweet potatoes, sorghum, groundnuts, soya beans, beans, and millet. At the time the study was initiated, the major malaria vector species in Tororo were Anopheles gambiae sensu stricto (s.s.) and Anopheles funestus with small numbers of Anopheles arabiensis [24]. Kihiihi is one of the sub-counties in Kanungu district (00° 45′ 03.1″ S, 29° 42′ 03.6″ E). Kanungu is an area of moderate malaria transmission [20]. 
It is situated in rolling hills at an elevation of 1310 m above sea level. The main activity in Kanungu is agriculture, where farmers grow bananas, millet, rice, cassava, potatoes, sweet potatoes, tomatoes, maize, groundnuts, and beans. The main vector is An. gambiae s.s. [24]. Walukuba is a peri-urban sub-county in Jinja district (00° 26′ 33.2″ N, 33° 13′ 32.3″ E). Jinja is a town with low malaria transmission and is situated at an elevation of 1215 m above sea level, close to a swampy area near Lake Victoria [20].

Habitat definitions
Aquatic habitats were defined using a method described previously in the Gambia [25]: (1) freshwater marsh (swamp) was a large water body containing vegetation and tall papyrus; (2) river fringe was the shallow edge of a permanent stream, associated with grass and tall reeds in deeper parts; (3) puddle was a small natural water-filled depression; (4) pool was a large man-made depression holding water; (5) water channel was open flowing water used for irrigation; (6) foot print was a depression made by the foot of a person, cow or other animal where water collects, often associated with the edges of large water bodies; (7) tire track was a water-filled depression made by a vehicle; (8) artificial pond was a large human-made permanent water body; (9) sand pit was a depression made after extraction of sand or bricklaying; (10) container was discarded plastic or metal waste; (11) pit latrine was any hole used as a toilet containing water; (12) rice field was a flooded area used to grow rice; (13) open drain was man-made and constructed for the purpose of getting rid of water; (14) lake fringe was the shallow edge of a lake; and (15) flood water was a large natural water-filled depression arising especially after heavy rains.

Larval surveys
As part of ongoing cohort studies, 100 households were enrolled in each of the three study sites. The households were used as starting points for making transects. From each household, a transect 20 m wide was walked downhill until a maximum length of approximately 1 km or until a large permanent aquatic habitat was reached. Larval surveys were carried out using classical larval prospection in 3-5 transects per site per month to assess the presence of potential water bodies in transects. Potential larval habitats were described morphologically (type and size) as defined above and geo-referenced using a GPS device (Garmin GPS series GPSMAP ® 62.2.3). Purposeful sampling was done to maximize collection of the aquatic stages of mosquitoes. All aquatic habitats were sampled for the presence of anopheline and culicine larvae and pupae. Once sighted, mosquito larvae and pupae were collected using a 350 ml dipper (Clarke Mosquito Control Products, Roselle, IL). Plastic transfer pipettes were used to collect larvae and pupae in very small habitats where dippers could not be used.
At each water body, a maximum of 10 dips were made to sample locations likely to harbour mosquito larvae, such as around tufts of submerged vegetation or substrate, edges of water bodies, and around floating debris. If transects included a water channel, river, or stream (long habitats) then measurements were made every 10 m along the water body. To avoid making several collections from the same habitat, a maximum of 2 measurements were made and, therefore, up to 20 dips per long habitat were sampled for mosquito larvae. The size of the water body was estimated visually and grouped into < 10 m in perimeter (small habitats), 10-100 m in perimeter (medium habitats), or > 100 m in perimeter (large habitats). Water from aquatic habitats collected by dippers was emptied into a white basin and checked for mosquito larvae and pupae. Specimens were identified morphologically to genus level, using the anopheline larvae morphological identification keys developed by Holstein in 1949 [26]. Mosquito larvae were recorded as anopheline or culicine and either early (L1-L2) or late (L3-L4) instars. The pupae were not identified further. The finding of at least one larva or pupa was sufficient to record a larval habitat as occupied (effective breeding site); no further quantitative estimates were made. Rainfall data Monthly rainfall data was obtained for the period of [27]. The data were aggregated by site and averaged monthly for the three sites. Data management and analysis Data were double entered into a Microsoft Access database and analysed using R statistical software [28]. Binary logistic regression analysis was used to determine variables associated with the presence or absence of Anopheles larvae at the three sites. Presence of Anopheles larvae was used as the dependent variable and habitat size and habitat type as the independent variables. Initial analyses indicated that late-instar Anopheles larvae counts were strongly correlated with early instar counts (r 2 = 0.833, p < 0.001) at all three sites, and these data were pooled together for further analysis. Pools were used as baseline habitats since they appeared in considerable numbers in all the three sites. To determine the habitats most productive for larvae, habitats in which Anopheles and culicine larvae were found were expressed as percentages of all habitats sampled. The relative abundance of Anopheles per habitat was calculated as the number of larvae divided by the number of dips taken from each larval habitat. Regression analysis was used to determine factors affecting larval relative abundance (y) after log-transforming log10 (y + 0.5) to stabilize the variance and improve normality of distribution. Correlation analysis was used to investigate the relationship between anopheline and culicine larvae and between Anopheles early and late instars in aquatic habitats. Statistical significance was set at a p value of < 0.05. Larval habitat types identified The results of larval habitats surveyed and the number of larval habitats classified by one of fifteen types are presented in Fig. 2. The habitat types varied from site to site, but since pools were common in all three sites Tororo (n = 13), Kanungu (n = 62) and Jinja (n = 64), they were used as a reference habitat in the analysis. 
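The habitat-level analysis just described was carried out in R; purely to illustrate the structure of the model, an equivalent specification in Python might look like the sketch below. The column names are hypothetical, and pools and small habitats are taken as the reference levels, as stated in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: anopheles_present (0/1), habitat_type, habitat_size
# ('small', 'medium', 'large'), anopheles_count and dips per sampled habitat.
df = pd.read_csv("larval_surveys.csv")

# Binary logistic regression with pools and small habitats as reference levels.
fit = smf.logit(
    "anopheles_present ~ C(habitat_type, Treatment(reference='pool'))"
    " + C(habitat_size, Treatment(reference='small'))",
    data=df,
).fit()
odds_ratios = np.exp(fit.params)      # corresponds to the reported odds ratios
ci_95 = np.exp(fit.conf_int())        # and their 95% confidence intervals

# Relative abundance (larvae per dip), log10(y + 0.5)-transformed as in the text.
df["relative_abundance"] = df["anopheles_count"] / df["dips"]
df["log_abundance"] = np.log10(df["relative_abundance"] + 0.5)
```

Exponentiating the fitted coefficients and their confidence limits gives odds ratios of the kind reported in the Results.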
In Jinja, the most common aquatic habitats were water channels 42.1% (n = 176) and pools 15.3% (n = 64); in Kanungu, water channels 23.3% (n = 125) and freshwater marshes 19.4% (n = 104) were the most common; while in Tororo, rice fields 40.6% (n = 112) and water channels 40.2% (n = 101) were the most common.

Abundance of Anopheles larvae in aquatic habitats
The proportion of aquatic habitats found with Anopheles larvae varied significantly according to site (p < 0.001). A total of 1205 aquatic habitats were characterized and sampled for mosquito larvae and pupae: 251 habitats in Tororo, 536 in Kanungu and 418 in Jinja (Table 1). A total of 17,028 anopheline larvae, 26,958 culicine larvae and 1189 pupae were collected at the three sites. Stratified by site, Jinja had 436 anopheline larvae, 1635 culicine larvae and 374 pupae; Kanungu had 11,257 anopheline larvae, 15,265 culicine larvae and 164 pupae; while Tororo had 5335 anopheline larvae, 4622 culicine larvae and 651 pupae.

Anopheles larvae were found in 15.3% (64/418) of aquatic habitats in Jinja. The most productive habitats for Anopheles larvae were foot prints and containers, in which Anopheles larvae were found in 22.2% (n = 27) and 20% (n = 10) of aquatic habitats sampled, respectively. The most productive habitats for culicine larvae were open drains and containers, in which culicine larvae were found in 28.9% (n = 45) and 40% (n = 10) of aquatic habitats sampled, respectively (Table 2).

Anopheles larvae were found in 41.8% (224/536) of aquatic habitats in Kanungu. The most productive habitats for Anopheles larvae were rice fields and freshwater marshes, in which Anopheles larvae were found in 71.8% (n = 39) and 67.3% (n = 104) of aquatic habitats sampled, respectively. Likewise, the most productive habitats for culicine larvae were rice fields and freshwater marshes, in which culicine larvae were found in 61.5% (n = 39) and 65.3% (n = 104) of aquatic habitats sampled, respectively (Table 3).

Fig. 2 Mean Anopheles larval abundance of habitats at the three sites

Anopheles larvae were found in 52.6% (132/251) of aquatic habitats in Tororo. The most productive habitats for Anopheles larvae were rice fields and pools, in which Anopheles larvae were found in 70.5% (n = 112) and 46.4% (n = 13) of aquatic habitats sampled, respectively. Likewise, the most productive habitats for culicine larvae were rice fields and pools, in which culicine larvae were found in 57.1% (n = 112) and 46.1% (n = 13) of aquatic habitats sampled, respectively (Table 4).

Human-made habitats were the main contributors of Anopheles larvae at all three sites. In Jinja, containers (n = 10), foot prints (n = 27) and pools (n = 64) were among the top five aquatic habitat types found with Anopheles larvae, with proportions of 20%, 22.2% and 12.5%, respectively. In Kanungu, rice fields (n = 39), artificial ponds (n = 18) and foot prints (n = 24) were among the top five aquatic habitat types found with Anopheles larvae, with proportions of 71.8%, 55.2% and 45.8%, respectively. In Tororo, rice fields (n = 112), pools (n = 13) and foot prints (n = 4) were among the top five aquatic habitat types found with Anopheles larvae, with proportions of 70.5%, 46.4% and 25%, respectively (Tables 2, 3, 4).

Larval densities for each habitat type and size at the three sites were highly variable (Tables 5, 6, 7). In Jinja, high larval densities were found in open drains and medium-sized habitats, with larval densities of 1.7 (0.81-2.73) and 0.81 (0.23-1.39), respectively.
In Kanungu, high larval densities were found in rice fields and large-sized habitats, with larval densities of 8.63 (6.99-9.26) and 2.09 (1.47-2.71), respectively, while in Tororo, high larval densities were found in puddles and medium-sized habitats, with larval densities of 3.07 (1.61-4.54) and 2.24 (1.43-3.04), respectively. Regardless of which habitat types were most prevalent, the mean Anopheles larval count in different habitat sizes varied from site to site. In Tororo and Jinja, higher larval counts per habitat were obtained in small habitats (< 10 m in perimeter) followed by medium habitats (10-100 m in perimeter), while in Kanungu, higher larval counts per habitat were obtained in large habitats (> 100 m in perimeter) followed by medium habitats (Fig. 3).

Fig. 3 Contribution of different habitat sizes to Anopheles and culicine larval abundance

Effect of rainfall on immature Anopheles and culicines abundance
The relationship between larval abundance and rainfall is presented in Fig. 4. Overall, there was no clear relationship between rainfall and the number of larvae found in the habitats at any of the sites. It is important to note, however, that the dry months (January-March and June-August) yielded low numbers of Anopheles larvae.

Discussion
Understanding habitat ecology, such as which habitat types and sizes are most productive, helps with planning larval control programmes. Larval control programmes may include larviciding or alternative methods of control, e.g. improving drainage ditches, filling sand pits, and other means of making habitats unavailable, which vary by habitat type or size. This study compared the ecology of mosquito larvae at sites of varying malaria transmission intensities in Uganda. Habitat types were driven by the economic activity of the sites, and thus human-made habitats and small habitats of < 10 m in perimeter accounted for most of the aquatic habitats found with immature Anopheles and culicines. Furthermore, anophelines and culicines often occupied the same aquatic habitats and were influenced by rainfall.

A broad diversity of aquatic habitats was surveyed in the study sites. There were considerable differences in the proportions of aquatic habitats found with anopheline larvae, ranging from 22.2% (n = 27) for foot prints in Jinja to 71.8% (n = 39) for rice fields in Kanungu and 70.5% (n = 112) for rice fields in Tororo. However, these figures should be interpreted with caution, since sampling dates and seasons were variable and direct comparisons could therefore not be made. Although Anopheles larvae were found in 22.2% of foot prints sampled in Jinja, an area of low transmission, water channels accounted for most of the Anopheles larvae collected because they were the most common habitats (n = 176). Jinja is a semi-urban area located near the shores of Lake Victoria and is therefore likely to have many water channels fed by water flowing from the lake shore. These channels have a constant supply of water throughout the year that favors larval growth. Likewise, most Anopheles larvae collected in Kanungu were from freshwater marshes because they were the most abundant habitat (n = 104). These freshwater marshes rarely dry out and therefore act as permanent breeding sites for Anopheles throughout the year. In Tororo district, an area of intense transmission, most larvae were collected in rice fields (n = 112). The differences in habitat types by site could be partly explained by differences in the economic activities and geography of these sites.
Kanungu is a low-lying area with many swamps and rivers as well as rice growing; and in Tororo rice growing is common and there is always need to divert water from rivers to the rice fields to support rice growing especially during the dry season. The prevalence of immature Anopheles and culicines in aquatic habitats in the three sites mirrored malaria transmission reported at these sites [20,29]. Jinja an urban area with low malaria transmission, had the lowest proportion of aquatic habitats found with Anopheles larvae (15.3%), Kanungu a rural area, with moderate transmission, immature Anopheles and culicines were found in close to half of aquatic habitats surveyed (41.9%), while Tororo district, is a rural area of high malaria transmission, immature Anopheles and culicines were found in slightly more than half of the aquatic habitats surveyed (52.4%). This could probably be due to number of larvae in aquatic habitats translate into adult mosquito densities and therefore the relation between larval densities, adult mosquito densities and malaria transmission. Epidemiological, entomological and parasitological studies have demonstrated similar trends using test positivity rates (TPR) of malaria parasites and daily human biting rates by adult Anopheles mosquitoes collected in these study area [20,[29][30][31]. A particularly important finding is the key role of rice fields in the production of immature Anopheles and culicines as observed in both rural areas. Rice fields produce prodigious numbers of immature Anopheles and culicines [32,33], but this does not necessarily lead to increased malaria transmission [32]. Rice growing in Kihiihi and Nagongera is a well-known agricultural activity in these areas and is supported in the national plan [23,34] and farmers in these rice growing areas often divert water from streams and rivers into their gardens with the aim of supporting rice growing in the dry season. This in turn creates puddles and small, clear open habitats within the rice field that are favorites for An. gambiae sensu lato (s.l.) [35]. Farmers grow rice two seasons a year in these areas and this creates aquatic habitats all year round, extending the transmission season. The aquatic habitats in these areas were much varied in number and composition which made direct comparisons of larval abundance between sites difficult. In addition, the number of habitats sampled varied between habitat types hence results should be interpreted cautiously. Even though rice fields presented much a risk factor for immature Anopheles and culicines breeding in both Kanungu and Tororo, it is important to emphasize that rice fields were by far the most common aquatic habitats. In Jinja and Tororo, the habitats associated with higher odds of finding larvae did not necessary have higher larvae densities. Human-made habitats such as borrow pits, puddles and rice fields accounted for more than 50 Anopheles larvae per habitat per sampling and were thus the most productive habitats (Fig. 2). In Kanungu, brick laying and sand mining is a common activity and therefore this creates borrow pits that contain water throughout the year. This would also lead to an extension of malaria transmission throughout the year. Human activities like fishing, agriculture and brick laying have been previously reported to play important roles in creating habitats for mosquito larvae [36][37][38]. 
In the future, as the population of Uganda grows, there is likely to be an increase in agriculture and house construction, favouring the creation of new aquatic habitats for Anopheles. It is important to adapt larval source management (LSM) strategies to reduce the number of vectors produced from these habitats. Rainfall occurs to a greater or lesser extent throughout the year, and immature Anopheles and culicines were common in the three study sites throughout much of the year. There was no strong relationship between rainfall and the numbers of immature Anopheles and culicines, indicating that during periods of low rainfall mosquitoes continue to thrive in semi-permanent or permanent water bodies. Although targeting LSM when aquatic habitats are few has been recommended [14], this analysis suggests that all water bodies need to be treated with larvicides, with environmental management directed at particular habitats. Small aquatic habitats (less than 10 m in perimeter) were the main source of immature Anopheles and culicines collected at all sites, followed by medium aquatic habitats (10-100 m in perimeter) and lastly by large aquatic habitats (> 100 m in perimeter). This is partly good news, as most small aquatic habitats are unstable and likely to dry out compared to large aquatic habitats. Unstable aquatic habitats have been shown to be less efficient in maintaining malaria transmission in western Kenya and Tanzania [39-41]. However, at these sites, aquatic habitats, which are mainly man-made, are likely to be refilled by diverting water from rivers and streams to support agriculture. This would maintain malaria transmission throughout the year. Immature Anopheles and culicines often occurred in the same water bodies. This suggests that the same aquatic habitats targeted for Anopheles larval control programmes could also be targeted for culicine larval control programmes. Previous studies have shown that Anopheles and culicine larvae are likely to occur in the same habitats [42]. Likewise, Anopheles early and late instars were highly correlated at the three sites. There were limitations to this study. Firstly, Anopheles mosquitoes were not identified to species level. Analysis of adult mosquitoes collected in these areas during the same periods as the larval surveys indicates that most adults were An. gambiae s.l.: of all Anopheles mosquitoes caught, 88.5% were An. gambiae s.l. in Jinja, 99.8% in Kanungu and 93.5% in Tororo [8,20]. Secondly, counts from larval surveys do not estimate the number of larvae produced by the different habitats according to the surface area of each habitat. Thus, although one small puddle may contain a high density of larvae, a nearby rice field with a lower density but a much larger surface area could be producing orders of magnitude more larvae than the puddle simply because it is so much bigger.
Conclusions
Immature Anopheles and culicines occurred throughout the year in a wide range of water bodies, many of them human-made. Rice fields were particularly important sources of immature Anopheles and culicines. Larval control programmes would need to treat all aquatic habitats and use environmental management to reduce the force of malaria infection.
Effects of gallus epidermal growth factor (gEGF) from chicken embryos on growth performance, serum biochemical indices, immune function and intestinal morphology of broilers
Abstract
We aimed to investigate the effects of dietary supplementation with gallus epidermal growth factor (gEGF) from chicken embryos on growth performance, immunity, and intestinal morphology in broilers. A total of 480 one-day-old AA broilers were randomly divided into 5 groups with 6 replicates of 16 chicks each. The control group was fed the basal diet, and the treatment diets were supplemented with 4, 6, 8 or 12 ng/kg gEGF, respectively. The experiment lasted 42 d. Broilers were harvested at the end of the experiment, and spleen, thymus, bursa, serum and small intestine samples were collected. Results showed that average daily gain (ADG) during days 1-21 in the 4, 6 and 12 ng/kg groups was significantly increased (p < .05); ADG (days 22-42 and 1-42) and average daily feed intake (ADFI) during days 1-42 in the 4 ng/kg group were also increased (p < .05). Dietary gEGF at 4 and 6 ng/kg improved catalase (CAT) activity, while total antioxidant capacity (T-AOC) and glutathione peroxidase (GSH-PX) activity were significantly increased (p < .05) and malondialdehyde (MDA) concentration was significantly decreased (p < .05) in the 12 ng/kg group. Compared with the control group, the thymus index of the 8 ng/kg group was significantly increased (p < .05). Dietary gEGF at levels of up to 8 ng/kg improved the bursa of Fabricius (BF) index (p < .05). Moreover, in the 8 ng/kg group, serum immunoglobulin A (IgA) and immunoglobulin M (IgM) concentrations were significantly higher (p < .05) than in the control group. Intestinal development was enhanced by gEGF inclusion. These findings demonstrated that dietary gEGF supplementation improved growth performance, antioxidant capacity, immunity and intestinal development in broilers, and that a suitable dosage of gEGF in broiler diets is 8 ng/kg. gEGF therefore has potential as a feed additive for broilers.
Highlights
Dietary supplementation with 4 ng/kg gEGF had a positive effect (p < .05) on the growth performance of broilers. Serum antioxidant capacity (T-AOC, GSH-PX, CAT) significantly increased, and MDA decreased, in the 12 ng/kg group. Dietary gEGF exhibited a positive influence on the immunity of broilers. Dietary gEGF improved the development of the small intestine by increasing villus height and reducing crypt depth in broilers.
Introduction
Epidermal growth factor (EGF) was first isolated from the mouse submandibular gland more than half a century ago (S.). EGF is a single-chain polypeptide composed of 53 amino acids with a molecular weight of 6000 Da. Studies have shown that EGF is highly stable to acids, alkalis and proteases, and remains stable for a long time at −20 °C (Cohen and Savage 1974), which allows its delivery to the gastrointestinal tract to exert trophic effects (Clark et al. 2009). Previous studies have indicated that EGF can promote the growth, proliferation and differentiation of epidermal cells (Cohen 1965). It is also beneficial for skin wound healing (Gibbs et al. 2000) and corneal repair (Yan et al. 2013). EGF is one of the most abundant growth factors found in milk (Odle et al. 1996). In human colostrum, the concentration of EGF is 500 times higher than that of other growth factors, such as amphiregulin and transforming growth factor-alpha (TGF-α) (Nojiri et al. 2012), indicating that EGF plays a key role in early intestinal development.
Further studies have found that the concentration of EGF in the digestive tract is much higher than that in the circulation, and it has a good therapeutic effect on many gastrointestinal diseases such as necrotising colitis and gastrointestinal ulcers (Guglietta and Sullivan 1995). Dietary addition of recombinant EGF (rEGF) was beneficial to the intestinal morphology and immunity of the jejunum in piglets, including villus height (VH) (Duh et al. 2000; Kitchen et al. 2005). The content of interleukin-3 (IL-3), the number of goblet cells and the level of mucin-2 (MUC2) were increased with dietary rEGF supplementation (Warner and Warner 2005; Huai 2016; Wang et al. 2020). The diarrhoea rate and feed conversion ratio (FCR) of weaned piglets fed rEGF decreased markedly (Wang, Xu, et al. 2014). In addition, injected EGF ameliorated alcohol-induced intestinal injury and improved intestinal integrity and permeability in rats. In general, mammalian EGF showed benefits for the growth and intestinal health of animals. Recently, modern intensive poultry production has developed rapidly. With increasing food safety awareness and the introduction of laws and regulations controlling the use of antibiotics, many studies on antibiotic alternatives have started in the poultry industry. In Europe, the ban of antibiotic growth promoters in 2006 increased the incidence of certain animal infectious diseases (Santovito et al. 2018; Cheng et al. 2021). Therefore, maintaining a healthy poultry gut is essential to ensure that production efficiency is not compromised (Rhayat et al. 2017). However, knowledge concerning EGF in broilers is limited despite the availability of numerous studies on EGF in mammals. EGF derived from Lactobacillus lactis (EGF-LL) significantly improved the growth performance and immune function of broilers by reducing the feed conversion ratio (FCR) and increasing the thymus index, spleen index, serum immunoglobulin A (IgA) and immunoglobulin G (IgG) concentrations, and secretory immunoglobulin A (sIgA) concentrations in the duodenum (Zhou et al. 2021). In this study, we made a preliminary analysis of the feasibility of gEGF as a feed additive in broilers and its effects on growth performance, serum biochemical and antioxidant indices, immune function and small intestinal development.
gEGF extract preparation
The experimental procedures were conducted with reference to the methods in a previously published patent (Lu et al. 2017). According to the preliminary results of our research team, the isolation experiments were carried out on 5-day-old chicken embryos, which had the highest EGF concentration. Chicken embryos were shelled and weighed, and 0.05 mol/L acetic acid (4 °C) was added to homogenise the tissues. The homogenate was centrifuged at 10,000 × g at 4 °C for 30 min, and the supernatant was collected. The precipitate was further homogenised with 0.05 mol/L acetic acid (4 °C) and centrifuged at 10,000 × g for 30 min; the two supernatants were then combined and the fat on the supernatant surface was removed. Sodium benzoate (25 g/L) was then added to the combined supernatant and stirred until completely dissolved. The solution was adjusted to pH 6.5 by adding acetic acid (4 °C), stirred for half an hour and filtered under reduced pressure. The precipitates were then dried at room temperature, and 20 mL acetone (4 °C) was added per gram of precipitate. After standing overnight, the mixture was filtered under reduced pressure.
After repeating the above steps several times, a crude powdered gEGF extract with a purity of about 80% was obtained. Purity was calculated as the percentage of EGF content in the total organic matter. The total carbon content of the accumulated organic matter was determined using a total organic carbon (TOC) analyser (TOC-VCPH, Shimadzu, Japan). Finally, the gEGF extract was dissolved in PBS buffer, and the concentration of gEGF was determined with an enzyme-linked immunosorbent assay (ELISA) kit (Shanghai Enzyme Biotechnology Co., Ltd.).
gEGF products used in feeding experiments
The gEGF extract prepared above was dissolved and diluted with PBS and sprayed evenly onto corncob powder at a fixed ratio (gEGF extract/corncob powder, 7:3). The gEGF product for the feeding experiments was obtained after natural drying. The gEGF content of the product was 8 mg/kg as measured by ELISA.
In vitro digestion test of gEGF
Artificial gastric juice: Sodium chloride (2 g) was dissolved in 7 mL of hydrochloric acid and distilled water (DW) (Kimura et al. 2016). Pepsin (activity 1:10,000; Wako Pure Chemical Industries, Ltd., Osaka, Japan) was added at a concentration of 0.1%, and DW was added to adjust the volume to 1 L (pH 2.0). One millilitre of gEGF solution was added to 24 mL of artificial gastric juice, which was then incubated at 37 °C for 0, 0.5, 1.5, 2.5, 3.5 and 4.5 h. Sodium hydroxide (1 M) was added to adjust the pH of the solution to 6.5, and ELISA was used to determine the concentration of gEGF.
Artificial intestinal juice: Artificial intestinal juice was prepared by mixing 250 mL of disodium hydrogen orthophosphate (0.2 M) with 118 mL of sodium hydroxide (0.2 M) and pancreatin (Wako Pure Chemical Industries, Ltd.) at a concentration of 0.1%, and DW was added to adjust the volume to 1 L (pH 8.0) (Kimura et al. 2016). One millilitre of gEGF solution was added to 24 mL of artificial intestinal juice, which was then incubated at 37 °C for 0, 0.5, 1, 2 and 3 h. The concentration of gEGF was then determined using an ELISA kit.
Experimental design, animals, and diets
A total of 480 one-day-old AA broilers of similar weight were randomly divided into 5 groups with 6 replicates of 16 birds each (per cage) and fed the gEGF product at the following inclusion levels in a feeding trial: 0, 500, 750, 1,000 and 1,500 g/t, respectively, corresponding to 0, 4, 6, 8 and 12 ng/kg gEGF in the feed. The basal diet was formulated according to the American NRC (1994) broiler feeding standard. The composition and nutrient content of the basal diet (starter diet and grower diet) are presented in Table 1. Throughout the 42-day experiment, all birds were raised in cages with free access to water and feed. The temperature inside the house was 34 °C in the first week and was reduced gradually by 2 °C each week; at the completion of the study, the temperature was about 26 °C. Humidity inside the room was maintained between 40 and 60% throughout, natural ventilation was provided, and illumination was maintained for 18 h every day. The broilers were vaccinated with the Newcastle disease vaccine and the infectious bursal disease vaccine on days 7 and 14 of the experiment, respectively. The housing and care of the birds were in accordance with the Guide for the Care and Use of Agricultural Animals in Research and Teaching (third edition, 2010).
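The in vitro digestion test above reports gEGF stability as the concentration measured by ELISA at each time point. A minimal sketch of the underlying calculation, expressing residual gEGF as a percentage of the 0 h value, is shown below; the ELISA readings are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the residual-gEGF calculation for the in vitro digestion test:
# residual (%) = concentration at time t / concentration at 0 h * 100.
# The ELISA readings below are hypothetical placeholders, not data from this study.

gastric_timepoints_h = [0, 0.5, 1.5, 2.5, 3.5, 4.5]
gastric_conc_ng_ml   = [20.0, 19.6, 19.4, 19.1, 19.0, 18.8]   # hypothetical readings

def residual_percent(concentrations):
    """Express each concentration as a percentage of the 0 h value."""
    baseline = concentrations[0]
    return [100.0 * c / baseline for c in concentrations]

for t, r in zip(gastric_timepoints_h, residual_percent(gastric_conc_ng_ml)):
    print(f"{t:>4.1f} h: {r:5.1f}% of initial gEGF remaining")
```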
All birds in each replicate were weighed individually after a 12 h feed deprivation on the mornings of days 21 and 42. Feed consumption was recorded on a replicate basis to calculate average daily feed intake (ADFI), average daily gain (ADG), and feed conversion ratio (FCR) during the starter (days 1-21), grower (days 22-42), and overall (days 1-42) periods.
Sample collection
On days 21 and 42, two birds of similar weight were randomly selected from each replicate. Blood samples were collected from the underwing vein. After standing for several hours, serum was separated by centrifugation at 3,000 rpm and stored at −80 °C for further analysis. After the broilers were euthanised by cutting the carotid arteries following cervical dislocation and necropsied immediately, the thymus (from the left and right sides of the neck), spleen and bursa of Fabricius (BF) were removed and weighed to calculate the immune organ indices for each group. Organ weight was expressed relative to the individual broiler's body weight: the immune organ index was calculated as immune organ weight (g)/pre-slaughter live weight (kg). After removing the intestinal contents with precooled saline, 1 cm segments from the middle of the duodenum, jejunum and ileum were collected and fixed in 4% paraformaldehyde solution. For histological analysis, samples were dehydrated, embedded, sectioned, stained with haematoxylin and eosin, and mounted. The villus height (VH), crypt depth (CD) and VH/CD ratio of each intestinal segment were measured and calculated using the MShot Image Analysis System software.
Statistical analysis
Statistical analysis was performed with one-way ANOVA followed by Tukey multiple comparison tests in SPSS 19.0 (SPSS, Chicago, IL, USA). Linear, quadratic and cubic effects were also tested in SPSS 19.0. Data are presented as means with SEM, and differences were considered significant at p < .05.
Results
The stability of gEGF in the in vitro digestion test
The degradation of gEGF by digestive enzymes in artificial gastric and intestinal juices was examined. The change in gEGF concentration after 0.5, 1.5, 2.5, 3.5 and 4.5 h of digestion in artificial gastric juice is shown in Figure 1(a); the concentration of gEGF remained essentially stable. The change in gEGF concentration after 0.5, 1, 2 and 3 h of digestion in artificial intestinal juice is shown in Figure 1(b); the concentration first decreased significantly and then increased. These results indicate that gEGF was rapidly degraded within the first hour, then recovered to a certain extent and remained stable in the artificial intestinal juice.
Growth performance
The effect of dietary gEGF on growth performance is shown in Table 2. ADG during the starter period in the 4, 6 and 12 ng/kg gEGF groups was significantly higher (p < .05) than that of the control group. ADG (grower and overall periods) and ADFI (overall period) were significantly increased (p < .05) at 4 ng/kg gEGF in comparison with the control group. No significant influence of gEGF on FCR (starter, grower or overall period) was observed (p > .05).
Biochemical indices in serum
The effect of dietary gEGF on serum biochemical indicators is shown in Table 3. Diamine oxidase (DAO) concentration was significantly decreased (p = .001) at 8 ng/kg dietary gEGF. No significant differences in serum GOT, GPT, GLU, UA, TP and ALB concentrations were observed among the groups.
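The performance and organ-index calculations described in the methods above reduce to simple per-replicate arithmetic. The sketch below illustrates them with hypothetical numbers, not data from this study.

```python
# Minimal sketch of the per-replicate performance metrics and the immune organ index
# described above. All input numbers are hypothetical placeholders, not study data.

def adg(start_weight_g, end_weight_g, days):
    """Average daily gain (g/bird/day)."""
    return (end_weight_g - start_weight_g) / days

def adfi(total_feed_g, n_birds, days):
    """Average daily feed intake (g/bird/day), recorded on a replicate basis."""
    return total_feed_g / (n_birds * days)

def fcr(adfi_value, adg_value):
    """Feed conversion ratio = feed intake per unit of weight gain."""
    return adfi_value / adg_value

def immune_organ_index(organ_weight_g, live_weight_kg):
    """Immune organ index = organ weight (g) / pre-slaughter live weight (kg)."""
    return organ_weight_g / live_weight_kg

# Hypothetical replicate of 16 birds over the starter period (days 1-21)
gain = adg(start_weight_g=45.0, end_weight_g=850.0, days=21)
intake = adfi(total_feed_g=18_000.0, n_birds=16, days=21)
print(f"ADG = {gain:.1f} g/d, ADFI = {intake:.1f} g/d, FCR = {fcr(intake, gain):.2f}")
print(f"Bursa index = {immune_organ_index(organ_weight_g=2.4, live_weight_kg=0.85):.2f} g/kg")
```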
Antioxidant capacity in serum
The effect of dietary gEGF on the antioxidant capacity of serum is shown in Table 4. CAT activity in the 4 and 6 ng/kg gEGF groups was higher (p < .05) than that of the control group, while T-AOC and GSH-PX activity were significantly increased (p < .05) and MDA concentration was significantly decreased (p < .05) in the 12 ng/kg group.
Index of immune organs and immunoglobulins in serum
The effect of dietary gEGF on immune organ indices is shown in Table 5. The thymus index increased with 8 ng/kg gEGF supplementation (p < .05) in comparison with the control group, and the BF index significantly increased (p < .05) at 4, 6 and 8 ng/kg. No significant difference in spleen index was observed between the gEGF treatments and the control group (p > .05). The effect of dietary gEGF on serum immunoglobulins is shown in Table 6. Serum IgA and IgM concentrations were significantly higher with gEGF at 8 ng/kg than in the control group (p < .05). However, there was no significant difference in serum IgG concentrations (p > .05).
Intestinal morphology
The effect of dietary gEGF on morphometric traits of the small intestine is shown in Table 7. The duodenal CD with gEGF supplementation at 6 and 8 ng/kg was significantly decreased (p < .05), and the duodenal VH/CD was significantly increased in the 8 ng/kg group (p < .05). The jejunal VH in the 4, 8 and 12 ng/kg groups was significantly higher (p < .05) than that of the control group, while the jejunal CD in the 6 and 8 ng/kg groups was lower (p < .05) than that of the control group. In addition, the jejunal VH/CD was increased (p < .05) at 8 ng/kg gEGF in comparison with the control group. The ileal VH/CD was significantly increased (p < .05) with gEGF supplementation at 4, 6 and 8 ng/kg, and the ileal CD was markedly lower (p < .01) in the 4, 6 and 12 ng/kg groups compared with the control group.
Discussion
The stability of gEGF in the in vitro digestion test
EGF is an important biologically active substance in the organism. It can promote the growth of the gastrointestinal tract, improve gastric acid secretion and intestinal enzyme activity, and protect the gut mucosa. Maintaining the biological activity of gEGF is therefore essential. EGF is usually completely absorbed and utilised through pinocytosis in the gastrointestinal tract of young animals. A study reported that goat milk EGF decreased rapidly within 1 h of digestion in artificial gastrointestinal juice, was essentially stable thereafter, and was more stable than recombinant pure EGF (Yun et al. 2016). Other studies have shown that EGF is stable in the gastric juice of mice and humans and is degraded to a variable extent in intestinal juice (Britton et al. 1988). These findings are consistent with previous reports (Cohen and Savage 1974).
Growth performance
EGF is a multifunctional polypeptide that regulates cell proliferation, differentiation, metabolism, survival and apoptosis, and it could be used to address problems in the poultry industry such as improving the growth performance of commercial broilers or enhancing disease resistance under adverse conditions.
Very few studies have focussed on gEGF despite the availability of numerous studies on mammalian EGF. Wang, Duan, et al. (2014) reported that the ADG and ADFI of early-weaned piglets fed rpEGF-LL (recombinant porcine epidermal growth factor produced by Lactobacillus lactis) were significantly increased, with no significant effect on FCR compared with the control group. Levesque et al. (2018) found that feeding early-weaned piglets diets containing EGF-PP (P. pastoris fermentation supernatant with EGF) increased (p < .05) ADG and body weight (BW). In contrast, Wang et al. (2019) demonstrated that dietary supplementation of EGF in early-weaned piglets had no significant effect on ADG, ADFI or FCR. However, in broilers fed gEGF-LL (recombinant gallus epidermal growth factor produced by Lactobacillus lactis), BW, ADG and ADFI were significantly increased, and FCR was decreased, compared with the control group (Zhou et al. 2021). In the current study, dietary gEGF increased the ADG of the starter period and the ADFI of the grower period in broilers. These findings are broadly consistent with previous studies. In addition, gEGF had no significant effect on FCR, which may be related to the amount and source of EGF, and further experiments are needed to draw more accurate conclusions.
Biochemical indices in serum
Serum TP and ALB contents reflect the nutritional status and protein metabolism of the body. Increased serum TP and ALB content can promote feed utilisation, improve nutrient absorption, and reduce feed consumption (Zheng 2019). Serum GOT and GPT are indicators of liver function; when the liver or muscles are damaged, serum GOT and GPT activities increase. Dietary supplementation with 400 μg/kg EGF significantly reduced serum GOT and GPT (Zhu 2018). In contrast, our results showed that gEGF had no significant effect on GOT or GPT, although GPT decreased by 40% and 30% with 8 and 12 ng/kg gEGF, respectively. UA is the final product of purine metabolism. Our results showed that dietary gEGF supplementation tended to increase serum UA in broilers. EGF is not directly absorbed but acts mainly on the intestinal mucosa; therefore, dietary EGF may affect purine metabolism by regulating intestinal digestion, absorption or metabolic function. There are few studies on the effect of EGF on UA production, and the underlying mechanism of the effect of dietary EGF on uric acid production needs to be clarified in subsequent studies. DAO is an intracellular enzyme produced mainly in the mucosa of the small intestine, where it is found in the cytoplasm (Thompson et al. 1987). When the intestinal barrier is damaged, its permeability increases and large amounts of DAO are released into the blood (Cheng YF et al. 2019). Therefore, serum DAO activity can be used as a marker of intestinal permeability and barrier injury. In this study, we observed that gEGF could reduce DAO activity, especially at 8 ng/kg. These results suggest that dietary gEGF supplementation had no negative effect on internal homeostasis and may alleviate damage to intestinal barrier function by decreasing intestinal permeability and maintaining intestinal morphology in broilers under external stress.
Antioxidant capacity in serum
With the industrialisation of broiler production, broilers are exposed to various external pressures that cause excessive production of reactive oxygen species (ROS) and disturb the redox balance in the body, leading to oxidative stress (Lee et al. 2019). The antioxidant system maintains a balance between the generation and elimination of ROS through the activity of antioxidant enzymes (including GSH, SOD and CAT). T-AOC is an overall index of the antioxidant function of the animal body. T-SOD (total superoxide dismutase) is considered the first line of defence of the antioxidant system: it catalyses the conversion of superoxide anion radicals (O2−) into hydrogen peroxide, which is then converted by CAT into water and oxygen, a process that is very important for scavenging free radicals. GSH-PX specifically catalyses glutathione-dependent reduction of hydrogen peroxide, limiting oxidative damage (Ramay and Yalçın 2020). The MDA content reflects the degree of lipid peroxidation and indirectly reflects the degree of body damage (Wei et al. 2005). Our current study found that supplementing the basal diet with gEGF tended to increase serum CAT, T-SOD, T-AOC and GSH-PX activities while tending to decrease serum MDA concentrations in broilers. Compared with the control group, dietary gEGF at 4 and 6 ng/kg significantly increased CAT activity, while at 12 ng/kg it significantly increased T-AOC and GSH-PX activity and decreased MDA concentrations. The antioxidant properties of EGF in broilers have rarely been studied. EGF significantly decreased LPS-induced apoptosis, lactate dehydrogenase (LDH) release and MDA production, and upregulated antioxidant enzyme secretion and the gene expression of T-AOC, SOD, CAT and GSH-PX to alleviate oxidative injury (Tang et al. 2018). EGF prevented hydrogen peroxide-induced disruption of tight junctions and adherens junctions in the bile duct epithelium (Guntaka et al. 2011). Under high-density feeding conditions, the antioxidant capacity of poultry cells is weakened and they are more susceptible to oxidative attack and damage. Therefore, we can preliminarily infer that gEGF can ameliorate the oxidative damage caused by external conditions and scavenge excess free radicals to promote the health and growth of broilers. The specific regulatory mechanism requires further research.
Index of immune organs and immunoglobulins in serum
The thymus, bursa of Fabricius and spleen are important immune organs in broilers. Generally, cell growth, development and division increase the weight of immune organs, and immune organ weight reflects the level of immune function. Elevated immune organ indices indicate maturation of the immune system (Shi 2020). We found that, compared with the control group, dietary supplementation with gEGF significantly increased the BF index; the thymus index was higher in the 8 ng/kg gEGF group than in the control group; and gEGF had no significant effect on the spleen index. Yu's research showed that the thymus index and spleen index increased significantly after feeding broilers EGF for two weeks (Zhou et al. 2021). This finding is consistent with our results and suggests that EGF can promote the development of immune organs in broilers. IgG, IgA and IgM are considered important components of the animal humoral immune system, which can directly reflect the immune status of the body (Song 2021).
IgG participates in humoral immunity, phagocytosis, agglutination and precipitation. IgA binds antigens and clears them without triggering an inflammatory response. IgM acts as a regulatory and bactericidal antibody in the early stage of pathogen infection. Furthermore, birds stimulated by antigens first produce IgM and then produce IgG and IgA through class switching mediated by helper T cells and cytokines (Yang 2007). In our study, gEGF supplementation significantly increased serum IgA and IgM contents in the 8 ng/kg group. Similarly, serum IgA, IgG and IgM levels increased significantly when early-weaned piglets were fed rpEGF expressed by Saccharomyces cerevisiae (Wang, Zhou et al. 2015). This suggests that gEGF can improve immune function and enhance the body's immunity to a certain extent. In addition, our results showed that gEGF supplementation significantly increased growth performance, immune organ indices and antioxidant capacity, suggesting that gEGF may improve the growth performance of broilers by improving immune function and antioxidant capacity.
Intestinal morphology
The small intestine is important for nutrient absorption, and VH and CD are key parameters for evaluating intestinal morphology (Missotten et al. 2013). EGF is a mitogen that promotes cell proliferation and differentiation and plays an important role in the development of the gastrointestinal tract of young animals (Cheung et al. 2009). The VH of the duodenum and jejunum was significantly increased in weaned piglets fed rpEGF, but EGF had no significant effect on CD or VH/CD (Bedford et al. 2015). The VH/CD of piglets given PEDV (porcine epidemic diarrhoea virus) plus EGF was significantly higher than that of piglets given PEDV only (Jung et al. 2008). The VH of the duodenum was significantly higher in the EGF-LL and rEGF groups than in the control group; no significant influence on CD was observed among the groups; and the EGF-LL treatment tended to increase the weight of the intestine (Kang et al. 2010). Furthermore, the number of intestinal crypt epithelial cells was higher in PEDV plus EGF piglets than in PEDV-only piglets, and EGF promoted recovery from atrophic enteritis in PEDV-infected piglets (Jung et al. 2008). Exogenous EGF increased the number of proliferating cell nuclear antigen (PCNA)-positive cells and PCNA mRNA expression, indicating increased production of pig intestinal crypt cells (Cheung et al. 2009; Kang et al. 2010). Moreover, the number of apoptotic bodies in crypts and villi was reduced with systemic EGF treatment (O'Brien et al. 2002). Our study showed that the CD of the small intestine was decreased, and the VH/CD and jejunal VH were increased, with dietary gEGF in broilers. These results are consistent with previous studies. Furthermore, the effects of EGF are mediated by the EGF receptor (EGFR), which is located on the microvilli and basolateral membranes of intestinal enterocytes (Tang et al. 2016). The relative content of PCNA in the villi of the small intestine with EGF-LL was lower than in the control group (Huai 2016). In addition, our study showed that the overall feeding effect in the 12 ng/kg group was not as good as that in the 8 ng/kg group, consistent with previous research (Lee DN et al. 2008; Wang LX et al. 2020). We confirmed that dietary EGF stimulates immunity, intestinal morphology and growth performance partly through EGFR and Wnt/β-catenin signalling.
Studies have shown that EGFR has different fates through different internalisation pathways depending on the EGF concentration (von Zastrow and Sorkin 2007; Sigismund et al. 2008): receptors internalised by clathrin-mediated endocytosis (CME) are recycled to the cell surface to continue signalling, whereas those internalised by non-clathrin endocytosis (NCE) are transported to late endosomes and lysosomes for degradation (Mukhopadhyay and Riezman 2007). Compared with a low EGF concentration (1.5 ng/mL, at which CME predominates and about 30% of the internalised ligand is degraded), at a high EGF concentration (100 ng/mL) about 55% of the ligand is degraded (with roughly 60% and 40% of EGF entering through CME and NCE, respectively) (Wei et al. 2021). Thus, we speculate that dietary supplementation with a suitable level of gEGF can improve intestinal morphology and the function of the damaged intestine by stimulating crypt and villus cell proliferation.
Conclusion
In summary, the current study suggests that dietary gEGF was stable in feed and was beneficial to the growth performance, immune function and antioxidant capacity of broilers. In addition, gEGF was of great benefit in improving intestinal morphology by increasing VH and reducing CD. Our findings support the application of gEGF in broiler diets, especially during the starter period, and provide a scientific basis for the future use of gEGF as a feed additive. In the current study, the suitable dosage of gEGF in broiler diets was 8 ng/kg.
Ethical approval
The present study was approved by the Animal Care and Welfare Committee and the Scientific Ethical Committee of Zhejiang University (No. ZJU2013105002).
Disclosure statement
No potential conflict of interest was reported by the author(s).
Calcium/calmodulin-dependent kinase II and memory destabilization: a new role in memory maintenance
Abstract
In this review, we discuss the poorly explored role of calcium/calmodulin-dependent protein kinase II (CaMKII) in memory maintenance, and its influence on memory destabilization. After a brief review on CaMKII and memory destabilization, we present critical pieces of evidence suggesting that CaMKII activity increases retrieval-induced memory destabilization. We then proceed to propose two potential molecular pathways to explain the association between CaMKII activation and increased memory destabilization. This review will pinpoint gaps in our knowledge and discuss some 'controversial' observations, establishing the basis for new experiments on the role of CaMKII in memory reconsolidation. The role of CaMKII in memory destabilization is of great clinical relevance. Still, because of the lack of scientific literature on the subject, more basic science research is necessary to pursue this pathway as a clinical tool.
The principal CaMKII isoforms expressed in the brain are α and β CaMKII (Bennett et al. 1983; Tobimatsu and Fujisawa 1989; Peng et al. 2004). These isoforms are usually associated with each other, creating a holoenzyme composed of 12 CaMKII subunits organized into two hexameric rings (Kolodziej et al. 2000; Hoelz et al. 2003; Rosenberg et al. 2005). The 12 subunits are primarily composed of α and β CaMKII heteromers, but homomers consisting of only αCaMKII have been observed (Bronstein et al. 1988). The enzyme's organization into a multi-subunit complex facilitates autophosphorylation. Examples of CaMKII autophosphorylation sites are threonine 305 (T305) and threonine 306 (T306). Phosphorylation of these sites is believed to be inhibitory because it blocks the CaM binding site, causing CaMKII to translocate out of the postsynaptic density (PSD) and decreasing long-term potentiation (LTP) and learning (Shen et al. 2000; Elgersma et al. 2002). The most studied autophosphorylation site of the CaMKII isoforms is threonine 286 (T286) for αCaMKII and threonine 287 (T287) for βCaMKII. Phosphorylation at the T286/287 sites occurs between neighboring subunits within the same holoenzyme and requires binding of CaM to both of the subunits involved (Hanson et al. 1994; Mukherji and Soderling 1994; Rich and Schulman 1998). Phosphorylation at T286/287 allows CaMKII to remain in an active state even in the absence of CaM, serving as an example of a CaM-independent state of activation (Miller et al. 1988; Hanson et al. 1994; Irvine et al. 2006). Autophosphorylation at T286 also enhances the CaM complex's binding affinity for the enzyme by 1000-fold, increasing its release time from less than a second to hundreds of seconds (Meyer et al. 1992). T286/287 autophosphorylation also changes CaMKII binding affinity for other molecules; for example, it increases the holoenzyme's binding affinity for the N-methyl-D-aspartate receptor (NMDAR) (Bayer et al. 2001). Because of its ability to switch from a CaM-dependent to a CaM-independent state of activation by T286/287 autophosphorylation (bistability), CaMKII has been suggested to act as a memory molecule, preserving 'memories' of strong calcium signals (Lisman 1994). T286A-mutant mice lack the ability to autophosphorylate at the T286 site and have one of the most severe spatial learning deficits described in a mutant mouse (Giese et al. 1998; Need and Giese 2003). The T286A mutation also blocks the induction of NMDAR-dependent LTP at excitatory hippocampal CA1 synapses (Giese et al. 1998; Cooke et al. 2006).
Indeed, the role of CaMKII in learning is widely accepted; however, its role as a memory molecule is still a matter of debate (Coultrap and Bayer 2012; Sanhueza and Lisman 2013). Buard et al. (2010) showed that blocking CaMKII activity via systemic injection of the CaMKII inhibitor tatCN21 after conditioning, but prior to a contextual fear long-term memory test, had no effect on memory storage. Even T286A-mutant mice can learn and maintain contextual and cued fear memory if they are conditioned using extended protocols (Irvine et al. 2005, 2011). Although αCaMKII T286 autophosphorylation is required for LTP induction in pyramidal CA1 neurons (Giese et al. 1998; Cooke et al. 2006), it can also induce long-term depression in the same cells (Marsden et al. 2007; Mockett et al. 2011), and it is not necessary for LTP induction in dentate gyrus granule cells (Cooke et al. 2006; Wu et al. 2006). Moreover, various authors have observed that LTP induction results in a transient increase in CaMKII autonomous activity, lasting for only a few minutes (Lengyel et al. 2004; Lee et al. 2009; Fujii et al. 2013). Nonetheless, CaMKII has been shown to be important for memory extinction. Prolonged and repetitive re-exposure to the conditioned stimulus without the unconditioned stimulus leads to a gradual weakening of the conditioned response, called memory extinction. Memory extinction is the learning of new environmental conditions that suppresses the previously learned conditioned response (Pavlov 1927; Eisenberg et al. 2003; Pedreira and Maldonado 2003; Myers and Davis 2007; Quirk and Mueller 2008; Pape and Pare 2010). The partial reduction of CaMKII autophosphorylation in heterozygous T286A mutants impairs extinction of contextual fear memory (Kimura et al. 2008). Furthermore, blocking of hippocampal CaMKII kinase activity impairs memory extinction (Szapiro et al. 2003). Inhibition of CaMKII activity by intrahippocampal injection of autocamtide-2-related inhibitory peptide (AIP) blocks the facilitation of memory extinction that results from exposure to a novel stimulus (de Carvalho Myskiw et al. 2014). Therefore, CaMKII may play a role in memory maintenance as a biological substrate of memory extinction. Here, we propose a different, novel and unexplored role for CaMKII in memory. We will avoid the 'traditional' discussion of CaMKII as a learning or memory molecule, as well as its role in memory extinction. Instead, we will explore a different role for CaMKII in memory maintenance: its role in memory destabilization, an important step of retrieval-induced memory reconsolidation.
Memory reconsolidation: destabilization and restabilization
The first evidence of memory reconsolidation was presented in Misanin et al. (1968), where an amnesic effect was induced by electroconvulsive shock 24 h after fear conditioning training. Such an amnesic effect could only be achieved if the electroconvulsive shock was presented after re-exposure to the conditioned stimulus. In other words, the associative memory between a neutral conditioned stimulus and an unconditioned stimulus was lost after electroconvulsive shock only if the memory was retrieved (Misanin et al. 1968). This observation challenged the long-prevailing theory that memories, once consolidated, are no longer labile. Recently, memory reconsolidation has been shown to be an important process for the maintenance and further strengthening of a memory (Lee 2008; Fukushima et al. 2014).
Retrieval-induced reconsolidation can destabilize a memory, a process involving proteasome-dependent degradation of synaptic proteins, followed by restabilization of the memory, a protein synthesis-dependent process (Fig. 1) (Kelly et al. 2003; Nader 2003; Lee et al. 2004, 2008; Lee 2008). A previously consolidated memory is impaired by pharmacological blockade of protein synthesis after retrieval (Nader et al. 2000). Blocking the proteasome with clasto-lactacystin-β-lactone, a specific, cell-permeable and irreversible inhibitor of the catalytic 20S proteasome subunit, reverts the memory impairment elicited by blocking protein synthesis. Hence, protein synthesis during memory reconsolidation is important to counteract protein degradation-dependent memory destabilization. The ubiquitin proteasome system (UPS) is the main mechanism of protein catabolism in mammalian cells and works by targeting proteins via ubiquitination, with subsequent degradation by the 26S proteasome (Varshavsky et al. 2000; Leestemaker and Ovaa 2017). Whether or not memory destabilization is necessary for memory maintenance after retrieval is still a matter of debate. Pharmacological blockade of the catalytic 20S subunit immediately after retrieval has been shown to impair memory maintenance (Artinian et al. 2008). However, as discussed by Artinian et al. (2008), it is unclear whether protein degradation is involved solely in the destabilization process. The UPS might act in memory reconsolidation by degrading memory suppressor proteins, thereby facilitating memory restabilization. However, Lee et al. (2008) reported that retrieval-induced protein degradation by the UPS is in fact related to memory destabilization; these authors observed no effect on memory maintenance when the UPS was pharmacologically inhibited after retrieval. Furthermore, Lee et al. (2008) also showed that inhibition of protein degradation increases memory maintenance by inhibiting memory extinction. The apparent contradiction between the observations of Artinian et al. (2008) and Lee et al. (2008) could be a consequence of their different experimental designs. The two studies differ in the hippocampal area where protein degradation was inhibited: the CA3 region was targeted in the former and the CA1 region in the latter. The CA1 and CA3 areas might depend differently on protein degradation after retrieval; both areas have been shown to play distinct roles in memory maintenance (Ji and Maren 2008; Langston et al. 2010) and have different proteomic profiles (Gozal et al. 2002). The two articles also present results from different behavioral paradigms: while Artinian et al. (2008) used the Morris water maze (MWM), Lee et al. (2008) utilized the contextual fear conditioning paradigm. The two behavioral paradigms are known to produce different phenotypes in animals harboring the same genetic mutation (Sterneck et al. 1998), or even in animals exposed to the same pharmacological intervention (Shuman et al. 2009). The repetitive training spread over many days that is required by the MWM might create, for example, a more flexible memory that becomes more sensitive to changes in UPS function because of the continuous processes of destabilization-restabilization during training.
Fig. 1 Schematic representation of how memory reconsolidation works. Retrieval of a memory by the presentation of the conditioned and/or unconditioned stimulus initiates the reconsolidation process. During reconsolidation, memory-related proteins are degraded, which is called memory destabilization (Lee et al. 2004). Concurrent with reconsolidation, memory is restabilized by protein synthesis (Nader et al. 2000). Although it remains clear that protein synthesis is necessary to compensate for protein degradation, it is unclear if one precedes the other or if memory destabilization and restabilization happen simultaneously. Nonetheless, the result is maintenance of the memory with the possibility of alterations in the memory during the reconsolidation process (Lee 2008). Although the NMDAR is involved in memory destabilization and restabilization, it has been reported that isoform specificity can be found for these two different steps of memory reconsolidation (Milton et al. 2013).
Nevertheless, the necessity for and the roles played by destabilization in memory maintenance after retrieval are still questions to be answered. For example, the identification of proteins targeted for degradation during the destabilization process is still poorly studied. First efforts have identified proteins involved in translational control, like MOV10 (Jarome et al. 2011), and synaptic structure, like Shank (Jarome et al. 2011). It is possible that memory destabilization plays different roles depending on the area of the brain being studied, and protein degradation could be relevant for both memory destabilization and restabilization.
Evidence for CaMKII regulation of memory destabilization
One of the strongest pieces of evidence for the role of CaMKII in memory destabilization comes from the observations of Cao et al. (2008). By over-expressing a transgenic form of αCaMKII with a modified ATP-binding site, referred to as the αCaMKII-F89G transgene, Cao et al. (2008) could both increase CaMKII levels and activity and specifically block αCaMKII-F89G activity. The authors observed that if αCaMKII activity was increased at the time of retrieval of cued or contextual fear memory, the memory was specifically erased. There was no spontaneous recovery, indicating that this was a true memory erasure and not an enhancement of extinction. Cao et al. (2008) suggested that the memory erasure phenotype could be related to an increase in reconsolidation-induced protein degradation, citing Lee et al. (2008), published in the same year. A more direct link between CaMKII and UPS-mediated protein degradation during destabilization was provided by Jarome et al. (2016). The authors reported that administration of AIP, a CaMKII inhibitor, into the amygdala did not affect fear memory but rescued the retrieval-dependent memory impairment induced by blocking protein synthesis. This is the characteristic phenotype observed after blocking memory destabilization, supporting a role for CaMKII in memory destabilization. Additionally, AIP treatment abolished retrieval-induced proteasome activity, in vitro and in vivo, and impaired retrieval-induced phosphorylation of the proteasome subunit Rpt6 on serine 120 in synaptosomes (Jarome et al. 2016). This suggests that CaMKII activity at the time of retrieval regulates protein degradation at the synapse. A less substantial piece of evidence for the role of CaMKII activation in inducing memory destabilization can be conjectured from the results reported by Rossetti et al. (2017). Rossetti et al.
(2017) showed that viral-induced expression of the αCaMKII T286D/T305A/T306A gene (a hyperactive form of αCaMKII) in the hippocampus blocks previously learned place-avoidance behavior. It is plausible to propose that the expression of a highly active form of CaMKII increases memory destabilization, resulting in the memory impairment phenotype observed by Rossetti et al. (2017), which is in agreement with Cao et al. (2008). Since the viral vector was injected 3 days after the first memory test, any CaMKII-induced enhancement of memory destabilization likely occurred during the second memory test, when the phenotype was observed. However, a clear link between CaMKII activation and retrieval-induced memory destabilization is difficult to establish because of the behavioral protocol used by Rossetti et al. (2017). The conditioned place-avoidance task used requires repeated trials both during training and during the memory test, prior to viral injection. In each of these trials, memory was retrieved, which probably initiated memory destabilization/reconsolidation throughout the training and memory test sessions. Furthermore, in this behavioral paradigm, the unconditioned stimulus (shock) is present during the memory test, allowing continued conditioning of the animal. Hence, this model might be too complex to determine whether a memory impairment is retrieval dependent or independent. Rossetti et al. (2017) present another plausible interpretation of the memory impairment resulting from the αCaMKII T286D/T305A/T306A mutation. Based on memory engram theory (Tonegawa et al. 2015; Josselyn et al. 2017), the authors proposed that excessive CaMKII activity from the αCaMKII T286D/T305A/T306A mutation resulted in association of the conditioned stimulus with multiple, unspecific synaptic/neuronal pathways. Consequently, the memory was lost because the memory engram was also lost. In our opinion, the data collected by Rossetti et al. (2017) do not allow one to definitively determine the memory processes affected by the mutation. Although an increase in memory destabilization and an impairment of the memory engram are both possible explanations, with the available data it is impossible to determine the cause of the memory impairment. We recently observed that dorso-hippocampal knockdown of the endogenous CaMKII inhibitor CaMK2N1 results in a retrieval-dependent memory impairment (Vigil et al. 2017). CaMK2N1 is a specific endogenous inhibitor of CaMKII kinase activity (Chang et al. 1998). In our experiments, CaMK2N1 knockdown animals presented normal freezing scores in a first contextual fear memory test but lower freezing scores in a subsequent test. This retrieval-induced memory impairment can be interpreted as an increase in memory destabilization. We also observed that 2 h after contextual fear memory retrieval there was a decrease in αCaMKII T286 phosphorylation, and this decrease was dependent on CaMK2N1 expression. Additionally, contextual fear memory retrieval promotes CaMK2N1 expression in the dorsal hippocampi (Vigil et al. 2017). If CaMKII activation induces memory destabilization, CaMK2N1 expression could be induced by memory retrieval to control the destabilization process. This could explain why knockdown of CaMK2N1 results in a retrieval-dependent memory impairment. It is unlikely that this memory impairment was the result of enhanced memory extinction, as no extinction was observed in the control group.
Extinction and reconsolidation appear to be mutually exclusive processes (Merlo et al. 2014). It is important to note that although CaMK2N1 was knocked down before conditioning, a memory impairment was observed only in the second memory test. This differs from Cao et al. (2008), who found that increased CaMKII activation impairs memory even during the first test. This comparison leads to two important conclusions. First, memory reconsolidation is a process that extends beyond the retrieval session. Second, under physiological conditions, CaMK2N1 is important for reducing CaMKII-induced memory destabilization after, but not during, memory retrieval. Therefore, CaMKII-induced memory destabilization likely starts during memory retrieval, as observed by Cao et al. (2008), and needs to be controlled by CaMK2N1 after the memory is retrieved (Vigil et al. 2017). Further supporting a role for CaMKII in memory destabilization, Rich et al. (2016) performed a phosphoproteomic study of basolateral amygdala samples from rats subjected to either extinction or reconsolidation of previously learned cocaine-seeking behavior. αCaMKII phosphorylation at S331, a largely understudied site, was shown to decrease when memory was retrieved and to increase after extinction. In the same article, Rich et al. (2016) showed that S331 phosphorylation reduces αCaMKII kinase activity. Thus, the reduction in S331 phosphorylation after memory retrieval is thought to increase CaMKII activity, possibly initiating CaMKII-induced memory destabilization. If this were the case, memory retrieval would first reduce CaMKII S331 phosphorylation, resulting in memory destabilization, followed by a later increase in CaMK2N1 expression to stop the destabilization process. Unfortunately, the authors did not test behavioral phenotypes induced by specific manipulation of S331 phosphorylation in vivo. Consequently, the role of CaMKII S331 phosphorylation in memory destabilization remains a hypothesis lacking substantial evidence. The memory phenotypes of the articles cited in this section, which can be interpreted as changes in memory destabilization caused by manipulation of CaMKII, are summarized in Table 1. Based on these observations, we can raise the hypothesis that CaMKII activation during and after memory retrieval induces memory destabilization. However, it remains unclear which molecular pathways are involved in CaMKII-induced memory destabilization. Here, we propose two possible mechanisms by which CaMKII can regulate memory destabilization.
CaMKII, memory destabilization and GluN2B
Memory destabilization was first associated with NMDAR activity by Ben Mamou et al. (2006). The authors showed that pharmacological inhibition of the NMDAR with intra-basolateral amygdala injection of ifenprodil or AP5 prevented the retrieval-dependent cued fear memory impairment induced by protein synthesis inhibition. That is, NMDAR inhibition prior to the memory retrieval session eliminated the necessity for memory restabilization, as memory destabilization was diminished. Using more specific pharmacological tools, Milton et al. (2013) suggested that within the basolateral amygdala, the regulation of destabilization and restabilization is dissociated: memory destabilization is regulated by activation of the NMDAR subunit GluN2B, whereas memory restabilization is regulated by the NMDAR subunit GluN2A. Milton et al.
(2013) did not observe any change in auditory fear memory after specific pharmacological inhibition of GluN2B activity, but GluN2B inhibition prevented the memory impairment induced by blocking protein synthesis after memory retrieval. Hence, GluN2B inhibition impairs fear memory destabilization. In contrast, injection of a GluN2A-preferring antagonist, NVP-AAM077, reduces freezing behavior after reactivation, much like protein synthesis inhibition. GluN2B-induced memory destabilization was also later described by Crestani et al. (2015) and Ferrer Monti et al. (2016), both of whom used a distractor stimulus to erase memory in a retrieval-dependent manner. Crestani et al. (2015) used an air puff during retrieval as a distractor to induce contextual fear memory impairment; this impairment could be blocked by intra-CA1 injection of the GluN2B antagonist ifenprodil prior to the retrieval session. Ferrer Monti et al. (2016), on the other hand, tested whether exposure of a rat to a different environment would induce memory erasure, and it did not, confirming the specificity and the necessity of memory retrieval for memory erasure. Therefore, both articles further support the existence of a GluN2B-induced memory destabilization mechanism. CaMKII is known to bind to the GluN2B subunit (Strack and Colbran 1998) in a process that regulates synaptic plasticity (Barria and Malinow 2005; Zhou et al. 2007) and is necessary for memory formation (Zhou et al. 2007; Halt et al. 2012; Stein et al. 2014). CaMKII binding to the NMDAR increases CaMKII activity by facilitating CaMKII T286/T287 autophosphorylation and inhibiting its dephosphorylation (Lisman and Raghavachari 2015). The CaMKII complex can bind to two different sites on GluN2B: one site depends on CaMKII's association with CaM (within residues 1120-1480), and the second depends on CaMKII T286 phosphorylation (residues 839-1120) (Bayer et al. 2001, 2006). Moreover, inhibition of CaMKII kinase activity by AIP treatment of hippocampal neuronal cultures and hippocampal slices reduces GluN2B colocalization with PSD-95 within synapses (Gardoni et al. 2009). Hence, it is possible that CaMKII activity increases synaptic levels of GluN2B, increasing memory destabilization after memory retrieval. If CaMKII activity induces memory destabilization, one could predict that its inhibition is necessary for the maintenance of the memory after retrieval; in other words, excessive CaMKII activation might result in memory impairment because of excessive destabilization. Corroborating this hypothesis, we recently reported contextual fear memory retrieval-induced hippocampal expression of CaMK2N1, and this expression was necessary for memory maintenance after retrieval (Vigil et al. 2017). CaMK2N1 is known to block CaMKII binding to GluN2B (Vest et al. 2007). Thus, retrieval-induced expression of CaMK2N1 could stop memory destabilization by blocking the CaMKII interaction with GluN2B. Figure 2 is a schematic view of how CaMKII activity might induce memory destabilization via regulation of GluN2B synaptic levels and how this process would be stopped by retrieval-induced CaMK2N1 expression. More experiments are necessary to further test this hypothesis and to understand how GluN2B activity regulates memory destabilization.
CaMKII, memory destabilization and protein degradation
Another possible mechanism by which CaMKII might play a role in memory destabilization is via regulation of UPS-dependent protein degradation.
It has been observed that post-retrieval inhibition of CaMKII stops retrieval-induced protein degradation and rescues the memory impairment resulting from protein synthesis inhibition (Jarome et al. 2016). Autophosphorylation of T286 increases αCaMKII's affinity for the proteasome and promotes proteasome recruitment to the PSD (Bingol et al. 2010). CaMKII can also phosphorylate serine 120 of the proteasome subunit Rpt6 and increase its activity (Djakovic et al. 2009; Jarome et al. 2013). The phosphorylation of Rpt6 seems to decrease synaptic strength by impairing miniature excitatory post-synaptic currents (Djakovic et al. 2012). CaMKII also phosphorylates the protein cylindromatosis. Once phosphorylated, cylindromatosis is activated and facilitates proteasomal degradation of proteins by removing K63-linked polyubiquitins from targeted proteins (Thein et al. 2014). Thus, CaMKII activation increases protein degradation by incrementing proteasome activity, by anchoring the proteasome in the PSD area and by facilitating access to target proteins. This regulation of protein degradation by CaMKII is probably related to the retrieval-induced, CaMKII-dependent proteasome activation reported by Jarome et al. (2016) (Figure 3). Furthermore, the retrieval-dependent memory impairment observed after transgenic CaMKII over-expression in the experiments of Cao et al. (2008) could also be explained by uncontrolled activation of protein degradation. Nonetheless, a direct link between the memory maintenance impairment induced by αCaMKII-F89G over-expression and UPS protein degradation is still to be shown. Although the results of Jarome et al. (2016) clearly support the existence of a role for CaMKII in memory destabilization through the regulation of UPS protein degradation, they do not establish a definitive role for CaMKII in memory destabilization. AIP, the CaMKII inhibitor used, belongs to a family of CaMKII inhibitory peptides designed based on the T286 autophosphorylation site of αCaMKII. These peptides are fragments of the T286 region, but with a substitution of the threonine to alanine (Hanson et al. 1989; Braun and Schulman 1995a; Ishida et al. 1995; Pellicena and Schulman 2014). The specificity of such substrate-based inhibitors of CaMKII is still a matter of debate. They have also been reported to inhibit protein kinase D1 (PKD1) (Backs et al. 2009) and protein kinase C (Smith et al. 1990; Hvalby et al. 1994). A more specific inhibitor of CaMKII, the inhibitory peptide CaMKIINtide, was used by Naskar et al. (2014). By inhibiting CaMKII with CaMKIINtide treatment in the Lymnaea stagnalis snail model, Naskar et al. (2014) described a memory consolidation impairment that could be rescued by proteasome inhibition. CaMKIINtide treatment also inhibited αCaMKII T305 autophosphorylation and decreased the levels of the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor subunit GluA1. The GluA1 decrease was rescued by proteasome inhibition (Naskar et al. 2014). CaMKIINtide is derived from the endogenous inhibitor CaMK2N1 and, so far, it has been shown to block CaMKII kinase activity specifically (Chang et al. 1998; Vest et al. 2007). CaMK2N1-derived peptides have also been reported to reduce the levels of CaMKII at the synapse (Sanhueza et al. 2011), inhibit T305 autophosphorylation (Vest et al. 2007) and block binding to Densin (Jiao et al. 2011).

Fig. 2. This schematic representation shows that CaMKII regulates the levels of GluN2B-containing NMDARs in the synapse after retrieval, affecting the maintenance of a memory. This hypothesis could explain the observations of Vigil et al. (2017). Once memory is retrieved, CaMKII increases GluN2B localization within the synapse, starting a memory destabilization process. In normal (wild-type) animals, this process is stopped by the expression of CaMK2N1, which inhibits CaMKII and blocks the GluN2B-induced memory destabilization. In Vigil et al. (2017), on the other hand, CaMK2N1 knockdown caused memory erasure because of excessive GluN2B-induced memory destabilization, resulting from the uncontrolled CaMKII activity and the consequent increase in synaptic levels of GluN2B. The GluN2B increase could be related to anchoring of extrasynaptic GluN2B in the post-synaptic density (PSD) and/or to a decrease in GluN2B degradation. The mechanism is still unknown.
Fig. 3. Schematic representation of how CaMKII can regulate ubiquitin-proteasome system (UPS) activity and increase memory destabilization by increasing protein degradation. Calcium entering through open NMDARs binds calmodulin, creating the Ca2+/CaM complex that activates CaMKII. Once active, CaMKII autophosphorylates threonine 286, further increasing its activity. Active CaMKII phosphorylates cylindromatosis (CYLD), which activates this enzyme. Active CYLD removes K63-linked polyubiquitins from proteins, targeting them for degradation. CaMKII also phosphorylates the proteasome subunit Rpt6 on serine 120, increasing its activity. Active CaMKII also increases proteasome localization in the PSD. Such a CaMKII/UPS pathway can help explain the behavioral phenotypes observed by Jarome et al. (2016) and Cao et al. (2008). Additionally, it is in accordance with the retrieval-induced decrease in Shank levels observed by Lee et al. (2008), the decrease in GluA1 (Vigil et al. 2017) and the decreased clustering of CaMKII in the dendrites (Tao-Cheng et al. 2013).

Although Naskar et al. (2014) used a more specific tool, they tested the role of CaMKII-regulated protein degradation in memory consolidation rather than reconsolidation. Nevertheless, like reconsolidation, consolidation also induces a wave of UPS-dependent protein degradation (Lopez-Salon et al. 2001; Artinian et al. 2008; Jarome et al. 2011; Jarome and Helmstetter 2014). Similar mechanisms might be used in both consolidation- and reconsolidation-induced protein degradation.

Conclusion

The role of CaMKII in memory maintenance has always been a matter of debate (Lisman 1994; Irvine et al. 2005; Buard et al. 2010; Lucchesi et al. 2011; Sanhueza and Lisman 2013; Rossetti et al. 2017). Here, we have gathered pieces of evidence suggesting that CaMKII may play a role in reconsolidation-induced memory destabilization, with CaMKII activation facilitating memory destabilization after retrieval. Cao et al. (2008) provide the first evidence for CaMKII-induced memory destabilization. Jarome et al. (2016) established the most direct link between CaMKII and memory destabilization, identifying protein degradation as a molecular pathway involved. Vigil et al. (2017) support the role of CaMK2N1 as a physiological mechanism by which CaMKII-induced memory destabilization can be controlled. Additionally, a reduction in CaMKII S331 phosphorylation could be responsible for initiating CaMKII-induced memory destabilization (Rich et al. 2016). Finally, Rossetti et al. (2017) also observed that an increase in hippocampal CaMKII activity could lead to memory impairment. Although these observations suggest that CaMKII activation induces memory destabilization, none of them provides definitive evidence. Jarome et al.
(2016) use a pharmacological tool that is limited by its non-specific activity. Vigil et al. (2017) and Cao et al. (2008) report the occurrence of a retrieval-induced memory erasure, but lack a direct link to a biological marker of memory destabilization. Examples of such markers would be changes in Rpt6 S120 proteasome phosphorylation (Djakovic et al. 2009, 2012; Jarome et al. 2013, 2016) and decreases in the levels of MOV10 (Jarome et al. 2011) or in the synaptic levels of Shank (Lee et al. 2008; Jarome et al. 2011). Rich et al. (2016) did not study any behavioral phenotype resulting from specific manipulation of S331 phosphorylation. The retrieval dependence of the behavioral phenotype observed by Rossetti et al. (2017) was not tested. Consequently, experiments employing refined, specific tools to manipulate and quantify memory destabilization and CaMKII activity, levels and localization are necessary.

Here, we propose two possible mechanisms by which CaMKII may regulate memory destabilization. It is possible that CaMKII controls memory destabilization via regulation of the synaptic levels of GluN2B and/or via regulation of protein degradation in the synapse. These two mechanisms can also be linked or interact with one another. The UPS-activity pathway is a more direct link between CaMKII and memory destabilization, and has a larger body of evidence supporting it. The CaMKII/GluN2B pathway proposed here has never been tested and lacks an essential understanding of how GluN2B regulates memory destabilization. Aside from the involvement of Ca2+ influx, which is mediated by L-type voltage-gated calcium channels (Crestani et al. 2015), not much is known about the mechanism. The hypothesis that memory destabilization is induced via a CaMKII-dependent mechanism is not without apparent controversy. Da Silva et al. (2013) advocate a role for CaMKII in reconsolidation, more specifically in the restabilization process. Da Silva et al. (2013) observed that hippocampal CaMKII inhibition by AIP after spatial memory retrieval induces memory impairment, which was rescued by inhibiting protein degradation. This memory impairment phenotype was time-dependent, not present 24 h after AIP treatment but present 5 days after. Thus, the phenotype observed by Da Silva et al. (2013) was interpreted as an indication that CaMKII activity is necessary for memory restabilization. The AIP-induced behavioral phenotypes reported by Da Silva et al. (2013) and Jarome et al. (2016) are quite different from each other. However, if we consider that CaMKII might be important in both memory destabilization and restabilization, we eliminate the controversy between these two different observations. While Jarome's manipulation of CaMKII could have affected memory destabilization, Da Silva's might have changed memory restabilization. It is also important to consider that Da Silva et al. (2013) used a different behavioral paradigm, the Morris water maze (MWM). The MWM is not the most conventional paradigm used to test memory reconsolidation, as memory formation requires several training trials spread throughout various days of training. Therefore, memory destabilization and restabilization will occur during training, making it impossible to confirm the retrieval dependence of the phenotype by using a memory retrieval-free group. Still, the MWM is a very useful and important paradigm to test different aspects of memory and learning. Similar to Da Silva et al. (2013), Rich et al.
(2016) also observed that intra-basolateral amygdala inhibition of CaMKII after memory reactivation impaired cocaine-cued memory reconsolidation. CaMKII inhibition reduced the cocaine-seeking behavior induced by presentation of the previously paired stimulus (three tone-light stimulations). Nevertheless, for the inhibition of CaMKII, Rich et al. (2016) applied bilateral injections of KN-93 or KN-62, which have been shown to be non-specific inhibitors of CaMKII. For instance, they also inhibit the kinase activity of CaM-dependent protein kinase IV (Redondo et al. 2010), of the 'calmodulin kinase-like vesicle-associated' protein (Mochizuki et al. 1993) and of others (Wayman et al. 2008). It is our opinion that CaMKII likely plays a role in memory destabilization. Consequently, CaMKII plays a role in memory maintenance, not as a 'memory molecule', but rather as a biological substrate of memory reconsolidation. If this is the case, understanding how CaMKII regulates retrieval-induced memory destabilization could have an enormous impact on the treatment of post-traumatic stress disorder and addiction. We also do not refute or discard the notion that CaMKII may play a role in memory maintenance via other mechanisms, such as extinction (Szapiro et al. 2003; Kimura et al. 2008; de Carvalho Myskiw et al. 2014), memory restabilization (Da Silva et al. 2013) or even as a memory molecule (Rossetti et al. 2017). More studies are necessary to properly dissect the roles of CaMKII in memory maintenance. It is of paramount importance to include a retrieval-free group in order to test the dependence of any memory phenotype on memory retrieval. The use of gene therapy treatments to knock down, knock in or knock out specific genes can yield rich observations on the functions of CaMKII in memory maintenance. Pharmacological tools derived from the endogenous inhibitor CaMK2N1, like CaMKIINtide (Chang et al. 1998), are preferable because of their specificity. CaMKIINtide has been fused to the trans-acting activator of transcription (tat) domain, increasing cell penetration and creating the 21-amino acid peptide tatCN21 (Vest et al. 2007; Buard et al. 2010). One can also find shorter versions, like CN19 (Coultrap and Bayer 2011) and the 17-amino acid CN17 (Gomez-Monterrey et al. 2013), which have been shown to work as effectively as CN21. Transgenic animals like the T286A mutant have always been, and continue to be, important models for studying the roles of CaMKII in memory (Giese et al. 1998; Rossetti et al. 2017). Still, the use of inducible mutations needs to be explored further, as it avoids long-term plasticity compensations that might bias observations. Finally, investigating the role of CaMKII in different brain areas, as well as the effect of CaMKII manipulation at different time points after learning and retrieval, will require a collaborative, long and challenging effort from many researchers.
IGV Short Scale to Assess Implicit Value of Visualizations through Explicit Interaction

This paper reports the assessment of the infographics-value (IGV) short scale, designed to measure the value in the use of infographics. The scale was designed to assess the implicit quality dimensions of infographics. These dimensions were experienced during the execution of tasks in a contextualized scenario. Users were asked to retrieve a piece of information by explicitly interacting with the infographics. After usage, they were asked to rate the quality dimensions of the infographics, namely usefulness, intuitiveness, clarity, informativity, and beauty; the overall value perceived from interacting with the infographics was also included in the survey. Each quality dimension, as well as the overall value, was coded as a six-point rating-scale item. The proposed IGV short-scale model was validated with 650 people. Our analysis confirmed that all the dimensions considered in our scale were independently significant and contributed to assessing the implicit value of infographics. The IGV short scale is a lightweight but exhaustive tool to rapidly assess the implicit value of an explicit interaction with infographics in daily tasks, where value in use is crucial to measuring the situated effectiveness of visual tools.

Introduction and Background

Infographics are entertaining and informative in disparate domains and ways. Their popularity is the result of several converging factors. Our routines, characterized by pressure and lack of time, force people to gather hit-and-run information. In this scenario, infographics may constitute an effective means that combines engaging power, aesthetic pleasure, and communication virtues [1,2]. Rapid message exchange and disambiguation in critical tasks and domains may also benefit from the immediacy and universality of visual signs. These signs may help convey effective and efficient clues among researchers [3] and professionals [4-6], and act as a tool for storytelling [7], for persuasion towards behavioral change, and for education [8-10]. In social-data, government, and institutional scenarios, the proliferation of data beyond the human capacity for elaboration correlates with the growing use of infographics. Indeed, visual aids may help manage data in responsible and socially meaningful ways [11]. The overwhelming appearance of big data in many public and private scenes has created the need for more cognitively accessible information and communication devices for sense-making. Besides traditional written and computational language and media [12], infographics seem particularly appropriate and in step with current communicative needs [13,14]. Many studies are concerned with what the ingredients of their design are and how to optimize their visual features (e.g., [15-17]); others focus on the creation of good infographics [18], that is, on how to design infographics that explicitly and correctly match the data. An exploratory study about the social value of infographics and an extended investigation about the range of qualities of static and interactive infographics were already covered by Locoro, Cabitza, and Batini (in [1] and [19], respectively). The aim of the presented study is to produce and validate a short scale to be exploited as a practical tool for evaluating how infographics improve people's interaction with data and add value to their decisions.
To the best of our knowledge, this is the first work that offers a lightweight evaluation scale for infographics value in use. This scale may reveal the value of infographics at any stage of their design, development, and deployment, and can be exploited as a short and lightweight measure of the value of infographics in daily tasks. Interactions with infographics encompass implicit interaction [20], represented by quality dimensions that may affect the context of use, information perception and interpretation, and decision-making [11]. Infographics also involve explicit interaction, in that users retrieve information by manipulating the visual elements of infographics to seek answers to their queries in contexts of use. Perception and interpretation are supported by both the implicit quality dimensions of infographics (e.g., clarity in depicting information in the context of use) and explicit interactional moves (e.g., filtering and selecting relevant information for the situation at hand). The two aspects of interaction are hardly separable; rather, they are entangled and act in synergy during infographic use. Nonetheless, open questions revolve around the identification of the intrinsic qualities of visual information that should be considered when designing infographics, as a means to improve infographic value in use [12].

The proposed measurement tool is the result of the combination of five value dimensions that are briefly introduced hereafter (their full analysis is reported in Section 2). These dimensions are representative of the interactions between the structural, informative, and aesthetic properties of infographics on the one hand, and human perception, cognition, and experience on the other. This synergy acts at different levels of information processing [21]. At the visual-perception level, structural details of infographics are processed and judged by people to be clear and "memorable". In this respect, the clarity and the aesthetics of infographics may play a central role [22,23]. At the highest level of cognition, the intuitiveness of structure and content may allow for the deepest and most insightful understanding of the information content of the infographics [24]. At the communication level, the most relevant dimension of infographics is content informativity. This communicative power makes infographics pertinent to decision-making whenever knowledge should be acquired in the best quality and quantity, and with the least cognitive effort. As a consequence, the user experience with infographics may yield high usefulness, i.e., what remains after the experience of information use [11]. Short scales have proven their usefulness in tests with users [25] because of both their light weight and their simplicity of interpretation. The validation of the infographics-value (IGV) short scale reported in this study shows that a minimal set of dimensions may account for the practical value of infographics. The IGV short scale can be useful for rapid-prototyping scenarios where time constraints are of crucial importance. The simplicity of the scale may allow for wide user tests, e.g., whenever users' limited design literacy should be bypassed in favor of design methodologies such as participatory design, end-user development [26], and agile design.
The contribution of this work is twofold:
• a lightweight and robust tool for measuring the value in the use of infographics that can be used in the above practical scenarios, where user experience and decision-making may be severely bounded by constraints such as time and low attention;
• the adaptation and extension of the IGV short scale to research contexts that focus on information-visualization design in synergy with practice, such as visual semiotics [27], rhetoric [28], and interpretation.

Value Dimensions

The value dimensions of infographics include usefulness, intuitiveness, clarity, informativity, and beauty. Below, we provide definitions for each value dimension investigated in our user study, mostly taken from the current literature on the topics of information quality [29], infographics design [15,30], and quality dimensions for information-visualization tools such as infographics [1]. We deemed these implicit dimensions of quality the most relevant in making explicit interaction with visual tools and data powerful and fluid. These aspects are crucial in situated and contingent knowledge, social understanding, and communicative practices. In the next paragraphs, we provide an overview of the examined value dimensions.

Usefulness is an individual and a collective dimension, and it can be defined in terms of the outcomes of decisions enabled by the information contained in the infographics. We focused on the use value of infographics, that is, on their usefulness for people. More precisely, we related usefulness to the extent to which behaviors induced by the use of infographics have a positive impact on individuals and social groups. Usefulness is obviously a cornerstone of many economic theories [31]; for our discourse, it is also an important value dimension for a number of reasons. For example, usefulness is more easily related to the end users' experience, and hence to the pragmatic and contingent value of infographics, rather than to the validity of their perceptual properties and design [32].

Clarity refers to the ease of understanding and fruition of information by users. Its common synonyms are readability and comprehensibility. Clarity depends on how easy the comprehension of the informational content of infographics is. This comprehension is supported by principles of symmetry, linearity [33], and minimalism of the graphical elements and of the pieces of displayed information [17]. However, as [23] remarked, besides "graphical excellence", there is another sense of clarity, dubbed "contingent clarity". This expression means that clarity should be tailored to the "audience, purpose, and context" of information fruition in "social [and] communal convention building, whereby readers interpret displays through their collective learning, experience, and values". In this sense, clarity enablers lie at the intersection of adaptable interfaces, participatory design, and the rich customization of visual display features [23].

Informativity refers to the capacity of representing all the salient parts of the facts of interest. Informativity is related to content correctness and completeness, and to the quantity of meaning conveyed by informational means (in the sense of Shannon's theory of communication). Furthermore, information is what remains with people after the interpretation of data according to a process of semiosis (i.e., continuous inference) that is potentially infinite [34].
The unbounded interpretation of facts may introduce selectivity and arbitrariness into informativity [28]. On the other hand, the helpful effects of diagrams lie in the objectivity and often monosemic nature of graphic signs, which are mostly disambiguated by a legend [27]. This has benefits for knowledge acquisition and use [35,36]. Studies about the cognitive usefulness of information visualization and the conceptual reasoning derived from types of graphics are central in identifying the extent of informativity in relation to practical aims [21]. Examples of conceptual reasoning related to visual informativity are association, differentiation, pattern recognition, anomaly detection, and ordering.

Intuitiveness refers to the organization of information in terms of context, so that infographics are capable of conveying all the properties of the reality of interest at a glance; its synonyms are familiarity and immediacy. Studies in cognitive theory have demonstrated that visual cues stimulate the intuitive rather than the rational mode of decision-making [37]. Intuitiveness is related to the mechanisms of implicit knowledge that enact direct and ready actions and, in a sense, circumvent the uncertainty raised by rational doubt [24]. According to [38], intuitiveness has to do with the more "complex, synergistic, unpredictable and qualitative (i.e., subjective) and unexpected" forms of information processing and appropriation by human beings.

Beauty refers to the look and feel of visual information as it is perceived by users, and refers to those "qualities such as orderly and clear design, e.g., consistent and structured layout, symmetry, clean and clear design" ([39] (p. 3)) that positively influence user perception. The authors refer to classical aesthetics, which corresponds to the common notion of elegance, and distinguish this concept from that of expressive aesthetics. The latter refers to the aesthetic pleasure of visualizing information, which can positively influence user engagement. In some studies, beauty is defined as a quality of the "processing experiences of the perceiver that emerge from the interaction of stimulus properties and perceivers' cognitive and affective processes" ([40] (p. 365)). The perception of beauty is then explained as the fluency of processing information [41]. Many studies investigate the reciprocal support of beauty and usability [39,42-45], sometimes with contradictory results. For infographics, beauty seems to be mainly related to two aspects, namely, (i) the aesthetic aspect (an infographic is a piece of art per se) and (ii) the capacity to attract users towards the perception of the information displayed in it (without any relation to efficiency and effectiveness in interaction and use).

Evaluation Test

In the test designed to evaluate our short-scale model, each participant was first asked to identify with the character of a randomly proposed descriptive healthcare plot. Then, an infographic containing information related to the scenario was presented to the respondent. Each scenario presented a plausible situation that mixed elements such as the severity of a disease, urgency, and family or friendship closeness at different intensity levels. On the basis of this plot, each participant was asked to look at the infographic in search of the relevant information related to the situation at hand.
A brief scenario plot is reported below for the three scenarios in the test.

Scenario 1 - Loved One's Renal Colic. One of your loved ones is showing symptoms of renal colic. You suggest that she call an ambulance, but she replies that she is not that ill and asks whether you could personally take her to the best hospital instead.

Scenario 2 - Your Back Pain. You have suffered from back pain for years, to the point that doing certain movements is extremely difficult for you. On the basis of some medical examinations you underwent in the past, your orthopedist's diagnosis is a slipped disc, and the treatment of choice is surgery, to be scheduled soon.

Scenario 3 - Your Child's Fever. Your one-year-old daughter is sick and has a high fever. As it is late at night, you cannot take her to the pediatrician, so you decide to take her to one of the city hospitals.

In the following paragraphs, we introduce and describe the infographics used in our user study. Figures 1-5 show screenshots of all the charts proposed to the test participants. All infographics were about healthcare open data and were created using Tableau software (https://public.tableau.com/en-us/s/). In their original version, they were all provided with tooltips that appeared when users hovered the cursor over graphical parts, adding detailed information in textual or numerical form about the pointed part.

The infographics in Figures 1 and 2 are related to Scenario 1. They report hospital data related to renal and urinary health problems. In particular, they focus on the proportion of second inpatient stays over the total number of inpatient stays in the same hospital for these diseases. Second stays refer to the same patient returning to the hospital of their first stay in the same year and for the same disease. The same two values were also compared with those of the preceding year. When the mouse pointer hovers over the segments or bars, the infographic displays information about the hospital of interest: the name, the total number of inpatient stays, the percentage of second stays, and the year of reference.

The infographics in Figures 3 and 4 are related to Scenario 2. The two infographics report the comparison between the average inpatient-stay period observed for some hospitals, expressed as the average number of days for patients with a specific disease, and the threshold value, i.e., the maximum average number of days above which the stay is considered anomalous. When the mouse pointer hovers over the items, the infographic displays the threshold value and the average inpatient stay (expressed as an average number of days as well as a percentage) for the group of diseases concerned.

The last infographic, in Figure 5, is related to Scenario 3. It shows hospitals on a geographical map with bubbles of different dimensions and colors. The text areas and check boxes beside the map report the presence of an emergency room and of a pediatric emergency room. By hovering the mouse pointer over the bubbles, a tooltip shows information about the hospital, such as the presence or absence of the two kinds of emergency rooms.

Caption fragment (Figures 1 and 2): [...] for the same disease in the same hospital (each stick is a hospital). Upward sticks denote worsening performance (an increase of second inpatient stays for the same disease); downward sticks denote improving performance (a decrease of second inpatient stays for the same disease). A gradient from green (lower % of second inpatient stays) to yellow (higher % of second inpatient stays) encodes with color the same increase/decrease shown by the stick orientation.

Caption fragment (Figures 3 and 4): bar chart of the average number of days of inpatient stay (y-axis) by hospital for some diseases and surgical operations, expressed both in absolute value (height of the gray bars, with the legend "valore soglia", which stands for threshold value) and as a percentage of the threshold value ("% degenza media", with a color gradient from red, higher percentage, to yellow, lower percentage). Examples of the depicted surgical operations are arthroscopy and back, hernia, and hand surgery.
The participants were asked to find the information they needed in each infographic. At the end of the task, the participants were asked to evaluate the value of the infographic according to the five dimensions and to rate the overall value of the infographic for the scenario of use. A page with five sliders, each associated with a quality dimension, was presented to the respondents. The items were presented as single words (e.g., useful, intuitive, clear), and a brief definition of the term used for each quality dimension was provided on the same page of the questionnaire. Each slider could be moved from left to right to rate how much (on a six-point scale from very little to very much) participants perceived that the infographic with which they had just interacted provided Usefulness, Intuitiveness, Clarity, Informativity, and Beauty. Lastly, participants were asked to rate the overall experience with the infographic at hand on the same six-point rating scale as above.

Respondent Sample

In order to reach as many citizens as possible, the test was advertised through the Open Data portal of a municipality, calling for citizen volunteers. The questionnaire was administered online as a computer-assisted web interview (CAWI). Each respondent received a version of the questionnaire such that they could access the survey only once. Precise instructions were given in the Open Data portal about compatible browsers, preferred devices (PC first and tablet as a second choice), and the duration of the questionnaire (around 10 min). The questionnaire was administered in Italian. Of the 732 citizens who participated in the test, 650 responses were considered valid and included in the analysis. The respondent sample was characterized as follows:
• 51% female (n = 334), 49% male (n = 316);
• 44% of the respondents were younger than 30 years old (n = 281), with a lower age limit of 21; 48% were older than 30 years old (n = 312); 9% (n = 57) did not declare their age.

Results

We analyzed the responses given in the evaluation of the infographics and built the model on the basis of a multiple linear-regression analysis to create and validate the proposed IGV short scale. The statistical assessment was done under the hypothesis that all quality dimensions are relevant to determining the overall value through a linear combination of their values. For each dimension, a null hypothesis of no significant relevance was hence tested. The statistics were computed with the support of IBM SPSS (v.24). Results are reported according to APA style [46]. Tables and figures help to clarify and synthesize the results. The correlation levels among the variables are reported as described in [47].
Multiple-Regression Analysis

All explanatory variables of our five-dimension model showed positive variance, as shown in the violin plots of the descriptive statistics in Figure 6. Regression analysis computes a linear function of the form y = Xβ + ε, where X is the matrix whose column vectors are the explanatory variables of our model, β is the vector of coefficients to be estimated in order to obtain the best-fitting model function, and ε are the residuals (errors) of the values predicted by the model function compared with the observed values. This linear function, when applied to our data, yielded a regression-function model of the analytical form yi = β0 + β1x1,i + β2x2,i + β3x3,i + β4x4,i + β5x5,i + εi, where β0 is the linear-function intercept, x1, ..., x5 are the input values of the five value dimensions of the infographics, and β1, ..., β5 are their linear coefficients. In our model, the dependent variable, the yi term in the above formula, was the "overall experience" item value. In the following, we report the tests necessary to verify that the proposed scale can be modeled with a multilinear-regression function of the kind depicted above. Specifically, the data were tested for proportionality to continuous scales, reliability and interactions, residual behavior, and model fit (e.g., linearity and the presence of outliers).

Proportionality of Ordinal Items to Continuous Items

The IGV short scale encompasses items expressed as semantic differentials, that is, six-level scales with explicit anchors at the extremes and no other indication for the other levels' values. We assumed that these items could be treated as a full Likert scale, which, under precise assumptions (see [48] for more details), is isomorphic to a continuous scale and can consequently be analyzed and validated with a linear-regression model. In order to show this isomorphism, we report the results of a linear regression run on a dichotomized version of the covariates (i.e., the five quality dimensions). We verified that a one-unit increment of each covariate (regardless of whether the step is from 1 to 2 or from 5 to 6) increased, on average, the overall value of the scale, all other variables being equal. This result demonstrated that this variation is proportional to that of a one-unit increment on a numerical scale. As introduced above, all items were codified as dichotomous dummy variables. Table 1 reports the resulting model, in which all variables were statistically significant with the exception of InformativityM. For all five dimensions, an increase of the score category increased the overall perceived score. This result confirmed our hypothesis that our six-point rating model is isomorphic to a continuous model.

Item Reliability, Variable Interactions, and Multicollinearity

The Cronbach's alpha reliability coefficient for the five dimension items of the IGV short scale was 0.90, and the correlations among items were all strong to very strong and positive, ranging from 0.57 to 0.80, with p < 0.001. Due to the strong correlations between the explanatory variables, a check for statistically significant interactions and a multicollinearity test were executed on the data to exclude item collinearity and multicollinearity.
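As a purely illustrative sketch (the authors used IBM SPSS and provide no code), the model fit and the item-reliability check described above could be reproduced in Python with pandas and statsmodels; the column names, the simulated six-point ratings, and the helper function below are assumptions, not the study's actual data or procedure.

```python
# Illustrative sketch: fitting the five-dimension IGV regression model and
# computing Cronbach's alpha. The ratings are simulated stand-ins for the survey.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 650  # number of valid respondents reported in the study

# Simulated six-point ratings for the five quality dimensions (hypothetical columns).
df = pd.DataFrame(
    rng.integers(1, 7, size=(n, 5)),
    columns=["usefulness", "intuitiveness", "clarity", "informativity", "beauty"],
)
# Hypothetical overall-value item loosely tied to the five dimensions.
df["overall"] = np.clip(df.mean(axis=1).round() + rng.integers(-1, 2, n), 1, 6)

# Multiple linear regression: overall value ~ five quality dimensions.
X = sm.add_constant(df[["usefulness", "intuitiveness", "clarity", "informativity", "beauty"]])
model = sm.OLS(df["overall"], X).fit()
print(model.summary())  # coefficients, R^2, F statistic, p-values

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items."""
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_var / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(df.iloc[:, :5]), 2))
```

With the real survey responses in place of the simulated frame, the same calls would yield the coefficient estimates, fit statistics and reliability coefficient of the kind reported in the following subsections.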
An n-way ANOVA test served to examine statistically significant interactions between the explanatory variables. Table 2 reports the results of the between-subject-effects test and shows that no significant interactions occurred, with the exception of two pairwise interactions out of 26: Usefulness and Beauty, and Usefulness and Clarity. The multicollinearity analysis is reported in Table 3. In order to compute how much of the model variance was explained by our variables, and whether our variables were mainly explaining our model (and no extra dimensions), we ran a multiple-regression process over them and observed the tolerance statistics, the variance inflation factor (VIF), and the eigenvalue-decomposition matrix. The tolerance factor, that is, the difference between unit variance and the proportion of a variable's variance explained by all the other explanatory variables, was measured for each explanatory variable. Its reciprocal, the VIF, was interpreted as the proportional increase of a coefficient's variance, with respect to the case of no multicollinearity, caused by the collinearity between each explanatory variable and one or more of the others. Acceptance thresholds for these two metrics were a VIF under 10 and a tolerance factor above 0.10 or 0.20 (the so-called "rule of 10"; see, for example, [49]). Tolerance and VIF are leave-one-out measures of linear dependency among variables and an estimation of their redundancy; these two measures cannot determine the structural form of linear relations [50], nor can they say anything with regard to other forms in which variables can turn out to be related. For this reason, we also examined the decomposition matrix provided by SPSS and reported in Table 4, where the coefficient matrix was decomposed into its principal-component eigenvalues, condition indices, and proportions of variance. Although eigenvalue decomposition and the related indices are weaker indicators of relevant collinearity than the VIF and tolerance measures [51], we also inspected them as a further confirmation of the previous tests, which indicated only weak multicollinearity [52].

Residual Analysis

The raw data must be checked against the residuals of the regression model, and some requirements on the residuals should be met, namely:
• var(εi) = σ²: homoscedasticity, i.e., homogeneous variance of the residuals along the linear model;
• Cov(εi, εj) = 0 for i ≠ j: independence, i.e., absence of autocorrelation or of meaningful patterns in the residuals;
• εi ∼ N(µ, σ): normality, i.e., the residuals should be normally distributed.
Figure 7 shows the linear relationship of each explanatory variable with the outcome variable, and the relation between the residuals and the predicted values of the multiple-regression model. A best-fit line was drawn in each scatterplot. The scatterplot of standardized predicted values showed that the data met the assumption of variance homogeneity. The data met the assumption of independent errors (Durbin-Watson value = 1.93). Lastly, in Figure 8, the histogram of standardized residuals indicated that the data contained approximately normally distributed errors, as did the normal P-P plot of standardized residuals, which showed points lying essentially on the normal distribution line.
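Continuing the same hypothetical sketch, the collinearity and residual checks above, together with the influence measures discussed in the next subsection, could be computed as follows; `model` and `X` are the fitted result and design matrix from the previous snippet, and the thresholds are those cited in the text (VIF < 10, tolerance > 0.10-0.20, Cook's distance < 1, DFBETAS below 2/sqrt(n)).

```python
# Illustrative regression diagnostics for the sketch above (not the SPSS output).
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

# Variance inflation factor and tolerance for each explanatory variable
# (the intercept column is skipped).
for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    print(f"{name:14s} VIF = {vif:5.2f}  tolerance = {1.0 / vif:4.2f}")

# Independence of residuals: values near 2 indicate no autocorrelation.
print("Durbin-Watson:", round(durbin_watson(model.resid), 2))

# Influence measures: Cook's distance (< 1) and DFBETAS (< 2 / sqrt(n)).
influence = model.get_influence()
cooks_d, _ = influence.cooks_distance
dfbetas = influence.dfbetas  # one column per model parameter
n = len(model.resid)
print("max Cook's distance:", round(cooks_d.max(), 3))
print("any |DFBETAS| above 2/sqrt(n)?", bool((np.abs(dfbetas) > 2 / np.sqrt(n)).any()))
```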
Absence of Outliers, Leverage, or Influential Points That May Bias Model Coefficients

The analysis of influential points was carried out by computing Cook's distance and DFBETA measures. Although they are computationally very similar, there is a difference between them. Cook's distance simultaneously measures the influence of a point over all parameter estimates; DFBETA measures the influence of each observation on one explanatory variable at a time. Both are obtained by computing the regression coefficients with a particular case excluded from the model input and comparing them with those computed with the same case included. Cook's distance is computed for each observation as the overall change in the model's fitted values when that case is removed from the model input; DFBETA is computed for each explanatory variable as the change in its estimated coefficient when the case is removed. Both analyses showed that the dataset contained neither outliers nor influential points that may have hindered the model. The maximal Cook's distance was <1 [53]. The five DFBETAs computed for each variable were all below the cut-off value of 0.08, given by the formula 2/√n, where n is the number of observations [54].

IGV Short-Scale Measurement Tool

After running the multiple-regression analysis, it was found that the five value dimensions envisioned for our scale explained a significant amount of the variance in the overall value of infographics (F(5, 632) = 510.98, p < 0.001, R² = 0.80, adjusted R² = 0.80). Figure 9 shows the structure of the model with R² and the coefficient estimates, and Table 5 shows the standard errors, 95% confidence intervals, estimated coefficients, standardized coefficients, and associated p-values. In summary, once a user obtains an ordinal rating for the Clarity (C), Usefulness (U), Beauty (B), Intuitiveness (IT), and Informativity (IF) of one infographic, it is straightforward to obtain the total perceived value as the weighted sum IGV = β0 + βC·C + βU·U + βB·B + βIT·IT + βIF·IF, with the coefficients estimated in Table 5. All of the variables contributed in a statistically significant way (p-value < 0.001) and were associated with a positive coefficient, so that the overall score grows when one of the value dimensions grows, all other dimensions being equal. Usefulness, Clarity, and Beauty are associated with higher coefficient values and have more impact on the overall perceived value than the other two dimensions, i.e., Informativity and Intuitiveness.
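To make the scoring step concrete, the following minimal sketch converts a new, made-up set of six-point ratings into a predicted overall value using the coefficients of the hypothetical `model` fitted in the earlier snippets; in practice, the coefficients estimated in Table 5 would play this role.

```python
# Illustrative use of the validated scale: turn one infographic's ratings into a
# predicted overall-value score. The ratings below are invented for the example.
import numpy as np
import pandas as pd

new_rating = pd.DataFrame(
    [{"const": 1.0, "usefulness": 5, "intuitiveness": 4, "clarity": 5,
      "informativity": 4, "beauty": 3}]
)
predicted_overall = model.predict(new_rating[model.params.index])
print("predicted overall value:", round(float(np.asarray(predicted_overall)[0]), 2))
```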
Discussion

In this study, we proposed a model for the measurement of the value of infographics and validated the IGV short scale. We conceived a list of value items for infographics and explored their relevance for users with an online questionnaire aimed at assessing the overall value of infographics as information and communication tools. The concept of infographics value in use, or value in practice, regards both the implicit and the explicit parts of interaction; in a way, value is an effect of implicit quality and a consequence of explicitly querying infographics for information. Implicit quality is related to the extent and cognitive ease with which users can perceive and interpret information by exploiting infographics to achieve some relevant goal; explicit interaction is related to the long-term gain in value deriving from decision-making based on valuable infographics. Value in use is thus the result of many implicit and explicit factors, and sometimes the effect of unpredictable use and unanticipated aims. The five value dimensions of Usefulness, Clarity, Beauty, Informativity, and Intuitiveness proposed to the users were statistically significant in defining the overall value in infographic use. The validation of the model depicted in Table 5 shows that all were significant, with p-values < 0.001. The same table shows that all value items contributed in a comparable way to the overall perceived value of the infographics, with a slight dominance of the dimensions of Usefulness, Clarity, and Beauty. Furthermore, Table 1 shows how an increase in the score of each value item doubled the probability that the infographic value would be in the upper half of positive scores, i.e., from 4 to 6. Our initial claim can be considered fully plausible and partially validated by this study, and hence worth further investigation. In summary, the more an infographic is perceived as able to satisfy concrete and situated information needs in daily contexts and practices, the more it is perceived as valuable. These results may suggest that, on the one hand, data have value if they are made accessible and comprehensible; on the other hand, their value lies in the comprehension itself, in the acquisition of information, facts, notions, and procedures, and in the resulting knowledgeable actions. These actions (e.g., being informed, making a decision, or choosing among alternatives), in turn, produce some positive effects. Our final claim calls for strong generalizability of this user study, which could provide a proof of concept of value in use, of its feasibility through design, and of its utility in the evaluation of infographics. Further studies should attempt to understand how to obtain resources of adequate value, and hence perceivable as useful, clear, beautiful, intuitive, and informative; in this regard, relevant implicit factors include immediacy of communication, rapid exploitation and appropriation of the information content, and aesthetic pleasure. These factors were all relevant in this experiment, as shown by the results obtained. Another aspect that should raise further questions is the lack of difference between respondent strata. For example, the current survey did not reveal any statistically significant divergence for either gender or age.

Study Limitations and Conclusions

A limitation of this study is the lack of comparison between infographics and other informational devices, such as tabular data or semantic graphs. The study did not investigate priorities between value dimensions or further sub-dimensions in search of independent and significant subfactors. Rather, all dimensions were considered atomic, equally influential on user perception, and independent. Further investigation of decomposed aspects, or of the specific influence of implicit and explicit factors, may enrich the picture and make the model more detailed. The lack of basic studies on measurement tools and short-scale versions for infographics drove us to investigate this direction first. Future research could proceed in both breadth and depth: in breadth, by comparing with and extending evaluations to other information devices; in depth, by extending measurement to many other dimensions of the cognitive sphere of interaction and to their interdependencies with the quality dimensions evaluated here. Dimensions in the cognitive sphere include, for example, post-usage information retention, or knowledge reinforcement in combination with other knowledge sources and devices; interactive dimensions include, for example, responsiveness; other dimensions are engagement and its emotional counterpart, which may also shed further light on the value in the use of infographics.
Funding: This research received no external funding.
Optimal metal domain size for photocatalysis with hybrid semiconductor-metal nanorods

Semiconductor-metal hybrid nanostructures offer a highly controllable platform for light-induced charge separation, with direct relevance for their implementation in photocatalysis. Advances in synthesis allow for control over the size, shape and morphology, providing tunability of the optical and electronic properties. A critical determining factor of the photocatalytic cycle is the metal domain characteristics, and in particular its size, a subject that lacks deep understanding. Here, using a well-defined model system of cadmium sulfide-gold nanorods, we address the effect of the gold tip size on the photocatalytic function, including the charge transfer dynamics and hydrogen production efficiency. A combination of transient absorption, hydrogen evolution kinetics and theoretical modelling reveals a non-monotonic behaviour with the size of the gold tip, leading to an optimal metal domain size for the most efficient photocatalysis. We show that this results from the size-dependent interplay of the metal domain charging, the relative band alignments, and the resulting kinetics.

The synergistic optical and chemical properties of semiconductor-metal hybrid nanoparticles lead to light-induced charge separation 1,2, opening the path for their function as photocatalysts in redox reactions, including hydrogen generation by water splitting [3-7], generation of radicals and photodegradation of organic contaminants 8. The complexity of the photocatalytic cycle, in particular for the fuel-generating water-splitting reaction, requires a reductionist approach to address separately the effects of the size, shape and composition of each component towards the rational design of an efficient photocatalytic system. The role of the exciton dynamics, related to the semiconductor component, has been investigated in particular in prototypical TiO2-based systems and in colloidal semiconductor-metal hybrid nanorods (NRs). These studies pointed to the potential of tailoring the semiconductor component in the hybrid nanoparticles, permitting fine tuning of the light absorption and electron-hole dissociation as preliminary steps to charge separation and catalysis [9-11]. The metal co-catalyst component has a particularly important role in the photocatalytic cycle. The charge separation from the semiconductor is directly affected by the metal domain characteristics, and the actual catalysis occurs on its surface 12,13. Control over the size of the metal domain, and its optimization, is therefore essential for rational photocatalytic system design. The size dependence of catalysis on bare Au islands deposited on titania was investigated, revealing a sharp optimum in catalytic performance for CO oxidation at an island thickness of ~2 atomic layers, corresponding to ~3 nm in diameter 14; in the seminal work of Goodman and coworkers, this was attributed to a metal-to-non-metal transition 15. In relation to photocatalytic activity, early studies discussed the effect of size on Fermi-level equilibration related to charging of the metal domain following irradiation of the system 16,17. The effect of the metal co-catalyst size, particularly on hydrogen generation, was addressed in the context of TiO2-Au and CdS-Pt systems with multiple metal domains 18,19.
While the former observed no size effect for Au domains varying between 3 and 12 nm, the latter reported optimal hydrogen generation for extremely small Pt clusters (~50 atoms). However, a detailed mechanistic description, particularly in view of these diverse behaviours, is missing and is important for the further development of such hybrid nanoparticles in the context of photocatalysis. In this work we address the effect of the metal co-catalyst size on the photocatalysis process, using hybrid semiconductor-metal nanoparticles with a single catalytic domain as a model system. By a combination of ultrafast transient absorption measurements, a hydrogen generation yield study and theoretical modelling, we observe and explain a non-monotonic metal domain size dependence of the hydrogen generation efficiency.

Results

Size-controlled Au tip growth on CdS NRs. CdS NRs of 31.6 nm length and 3.9 nm diameter were synthesized using a modification of a previously reported procedure employing seeded growth 20 (Supplementary Fig. 1). Site-selective Au deposition on a single rod apex, with high control of the metal tip size, was achieved by combining two different synthetic approaches consecutively (Fig. 1a). First, a dark reaction was used to obtain site-selective growth of small metal islands on the apexes of the CdS NRs (refs 21,22). As can be seen in the transmission electron microscopy (TEM) image in Fig. 1c, a single small metal tip of ~1.5-1.8 nm diameter grows on one apex with a narrow size distribution (~7%). This selective growth takes place due to the favoured surface reactivity of the (001) facet of the CdS rod, which encourages the heterogeneous nucleation of gold. Furthermore, the use of long-alkyl-chain amines such as octadecylamine, instead of shorter ones, minimizes Au nucleation on the less reactive facets, such as the sides of the rod. Owing to the length of the alkyl chain, a phase transition to a static phase occurs at lower temperatures, thus blocking the Au precursor's access to the NR surfaces 23. These small Au domains serve as seeds for the second step, using light-induced Au growth under an inert atmosphere and at low temperature, 2-4 °C (Supplementary Methods). This approach allows for size control by changing the irradiation times and the Au3+/NR ratio. Figure 1d-f shows TEM images of CdS-Au hybrid nanoparticles with different Au tip sizes and with narrow size distributions (Fig. 1g). Figure 1b presents the absorption spectra of the bare CdS NRs and of CdS-Au hybrid nanoparticles with different Au tip sizes. All spectra exhibit a similar sharp rise at 460 nm related to the onset of the CdS NR absorption. Several absorption features are seen to the blue of the onset, related to the band gap and to higher excited optical transitions of the CdS NRs, which signify the good size monodispersity of the sample. A plasmon peak develops at 540 nm, correlated with the increase of the metal tip size. Phase transfer to aqueous solution was performed with polyethyleneimine, which was recently reported as a high-performance surface coating for photocatalytic applications and provides good colloidal stability 11.

Transient absorption measurements of charge separation. The effect of the different Au domain sizes on the charge carrier dynamics was studied using broadband ultrafast transient absorption spectroscopy. In Fig. 2a we show the time-resolved differential transmission (DT/T) spectra for the CdS NRs and the series of Au tip sizes.
Following 450 nm optical excitation and the formation of excited electron-hole pairs, the DT/T spectra at early delay times reveal a pronounced bleach around 450 nm in both CdS NRs and CdS-Au hybrid NRs, attributed to depletion of the first excitonic transition in the CdS rods due to electron state filling 24. In addition, for the larger Au tips, a broad bleach feature develops around 540 nm, which corresponds to the plasmonic feature of the Au tips 25. The decay of the plasmonic feature, by electron-phonon scattering, showed no size dependence, and the measured lifetime for all Au domain sizes was 1.5-1.7 ps (Supplementary Figs 6-8), consistent with prior reports for colloidal Au nanoparticles 25,26. A comparison of the normalized transient absorption dynamics for the bleach recovery in the spectral region of the CdS band gap exciton is presented in Fig. 2b for the different Au tip sizes. In the bare CdS NRs (upper panel, Fig. 2b), a fast decay component with a small amplitude is observed, related to residual cooling of the electrons to the CdS conduction band edge. This is followed by a long decay component, corresponding to exciton recovery to the ground state. These bleach recovery kinetics were fitted to a bi-exponential function and resulted in a time constant for electron-hole recombination of 3.4 ns (Supplementary Table 3). In the presence of a metal tip, a clear additional timescale is introduced, slower than the rapid cooling and faster than the electron-hole recombination. The amplitude of this intermediate decay process increases with increasing metal domain size. We assign this additional timescale to the electron transfer from the CdS to the metal tip. The measured kinetic decays were fitted to a tri-exponential function (Supplementary Note 3). The charge transfer times extracted from this fit were 16 ps for 6.2 nm, 29 ps for 4.8 nm, 103 ps for 3.0 nm and 770 ps for 1.6 nm Au tip sizes, with amplitudes decreasing from 40 to 15% for the smaller tip sizes.

Photocatalytic hydrogen production efficiency measurements. The photocatalytic activity of the same hybrid nanoparticles was measured via the light-induced reduction of water at 405 nm to produce hydrogen in the presence of Na2S·9H2O and Na2SO3, acting as sacrificial hole scavengers, as depicted schematically in Fig. 3a (inset) 27. The amount of hydrogen gas produced was determined using a gas chromatograph equipped with a thermal conductivity detector. Figure 3a, blue curve, shows the hydrogen generation rate versus Au tip size. A weak dependence, observed for the two smallest sizes, is followed by a marked decrease of the rate for the larger tips, amounting overall to a factor of nearly 10 between the maximum and minimum rates. These results represent the actual hydrogen evolution rates, but they do not account for photons that are absorbed directly by the intraband transitions of the metal tip and do not contribute to the generation of hydrogen because of their rapid relaxation. The total absorption of such CdS-Au hybrid nanoparticles can be considered, to a first approximation, as the superposition of the exciton and plasmon contributions 28. The red curve in Fig. 3a corrects for this by normalizing the rates to the semiconductor component absorption.
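The tri-exponential analysis of the bleach-recovery traces described earlier in this section can be illustrated with a short, self-contained sketch; the synthetic trace, noise level, initial guesses and parameter values below are placeholders rather than the measured data, and the fit simply recovers the intermediate time constant that would be assigned to electron transfer.

```python
# Illustrative sketch of multi-exponential fitting of a (hypothetical) normalized
# bleach-recovery trace: fast cooling, intermediate charge transfer, slow recombination.
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, tau1, a2, tau2, a3, tau3):
    """Sum of three exponential decays."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + a3 * np.exp(-t / tau3)

# Hypothetical trace (time in ps) with a little noise added.
t = np.linspace(0.1, 3000, 600)
true = tri_exp(t, 0.15, 0.8, 0.35, 30.0, 0.50, 3400.0)
signal = true + np.random.default_rng(1).normal(0, 0.005, t.size)

p0 = (0.2, 1.0, 0.3, 50.0, 0.5, 3000.0)  # guesses: fast / intermediate / slow components
popt, _ = curve_fit(tri_exp, t, signal, p0=p0, bounds=(0, np.inf))
amplitudes = popt[0::2]
taus = popt[1::2]
print("relative amplitudes:", np.round(amplitudes / amplitudes.sum(), 2))
print("time constants (ps):", np.round(taus, 1))  # intermediate tau ~ electron transfer time
```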
The actual concentration of each sample was obtained by inductively coupled plasma mass spectrometry (ICP-MS). Therefore, the red curve is normalized to the overall Cd content, which is proportional to the contribution of the semiconductor to the nanohybrid absorption. This normalization therefore expresses more cleanly the essential metal domain size effect in hydrogen reduction. It reveals a non-monotonic dependence in which an intermediate Au tip size provides the optimal hydrogen evolution rate for the photocatalytic water reduction and the highest value for the hydrogen evolution quantum yield (QY). An additional aspect of hydrogen generation rate normalization can be considered to isolate the effect of the Au metal domain on the efficiency of this photocatalytic process. This can be addressed by normalizing the hydrogen production rate to the loading of the co-catalyst component 18. X-ray photoelectron spectroscopy measurements on the hybrid nanoparticles (HNPs) with different Au metal domain sizes show a dominant doublet at the Au 4f binding energy (BE) consistent with the zero-valent Au peak (84.0 eV), which indicates the metallic nature of these domains. The intensities of the Au 4f 7/2 and 4f 5/2 peaks show a pronounced increase correlated with the larger metal domain sizes (Supplementary Fig. 4). Quantification of the atomic concentration percentage of each element in the HNP structure, presented in Supplementary Table 2, allows the calculation of the Cd:Au ratio in each HNP sample. The nearly 10-fold increase of the relative atomic concentration of Au between the different metal domain sizes indicates the significantly higher loading of the Au metal on the CdS semiconductor NR structure with increased Au tip size, while the minor change in the Cd or S atomic concentrations implies a lack of substantial change in the rod dimensions. Consequently, normalization of the hydrogen production rate by the Cd:Au ratio is considered as normalization to the Au metal loading. The results of such correction for the measured hydrogen generation rates are fully consistent with the CdS absorption normalization obtained by ICP-MS discussed above, and reveal a similar non-monotonic behaviour as a function of the Au metal domain size (Supplementary Fig. 5 and Note 2). Discussion. To rationalize the non-monotonic size dependence of the photocatalytic behaviour along with the charge transfer dynamics, we propose a minimalistic model consisting of several kinetic steps, as sketched in Fig. 3b and Supplementary Fig. 9. The photoexcited electron in the semiconductor can relax by electron transfer to the co-catalyst metal domain (with a rate given by k_ET) or by electron-hole recombination (with rate k_e-h). The latter is rather slow 1 compared with the other processes and is assumed to be constant, independent of the Au tip size (k_e-h = 1/5 ns⁻¹). The electron on the metal tip can either promote the water reduction reaction (k_WR) or undergo back recombination with the hole that is left behind on the semiconductor domain (k_rec), which is modelled by the hole transfer from the semiconductor to the metal tip. An additional important process is the trapping of the hole (k_ST), with a rate that is comparable to that of electron transfer into the metal tip 29. Once the hole is trapped, it blocks the electron transfer channel via the electron-hole Coulomb interaction, which also leads to a localization of the electron 11,30.
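The two rescalings described above (by the semiconductor absorption, through the Cd content from ICP-MS, and by the Au loading, through the Cd:Au ratio from XPS) amount to simple element-wise divisions; a minimal sketch with placeholder numbers, not the measured values, is:

```python
import numpy as np

# Hypothetical per-sample values (placeholders, not the measured data)
au_tip_nm   = np.array([1.6, 3.0, 4.8, 6.2])      # Au tip diameter
h2_rate     = np.array([9.0, 10.0, 4.0, 1.2])     # raw H2 evolution rate, arbitrary units
cd_content  = np.array([1.00, 0.95, 0.90, 0.88])  # relative Cd content from ICP-MS
cd_au_ratio = np.array([40.0, 18.0, 8.0, 4.5])    # Cd:Au atomic ratio from XPS

rate_per_cds = h2_rate / cd_content      # rate per unit of semiconductor absorption
rate_per_au  = h2_rate * cd_au_ratio     # rate per unit of Au loading (1/(Cd:Au) is the Au fraction)

for d, r_abs, r_au in zip(au_tip_nm, rate_per_cds, rate_per_au):
    print(f"{d:4.1f} nm tip: rate/CdS absorption = {r_abs:6.2f}, rate/Au loading = {r_au:7.1f}")
```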
The efficiency of hydrogen production can be obtained in closed form and is given by the t → ∞ limit of the solution to the master equation (equation (1); Supplementary Note 4). To make contact with the measured hydrogen production efficiency, the different rates appearing in equation (1) need to be determined. In fact, some of these rates can be determined independently by combining the transient absorption measurements with a suitable theoretical model. The semiconductor-metal electron transfer (k_ET) and the back recombination (k_rec) rates can be described by Auger processes, with values given by Fermi's golden rule (equations (2) and (3)), where r is the metal domain radius, ħ is Planck's constant divided by 2π, m*_e and m*_h are the effective masses of the electron and the hole in the metal, t_e and t_h are the electron and hole tunnelling matrix elements (assumed to be independent of r), and ε_c, ε_v, φ(r) and ε_F(r) are the semiconductor conduction and valence band energies, the metal work function and the metal Fermi level measured from the bottom of the energy band. Note that both rates depend on the metal tip radius r through the steep dependence of the density of states on the tip volume (r³), and weakly through the dependence of the Fermi energy 31 (ε_F(r)) and of the work function 32,33 on r, in which γ is the surface tension, v_M is the molar volume, z is the number of transferred electrons and F is Faraday's constant. The water reduction reaction on the metal co-catalyst is described by the cathodic rate in the Butler-Volmer model 34 for redox reactions. The electron transfer process is given by a Marcus-like expression 35, in which k_WR is the electron reduction rate of an adsorbed water molecule, k⁰_WR is the standard rate constant, R is the gas constant, T is the temperature, α is the electron transfer coefficient, which determines the symmetry of the transition state (for example, α = 1/2 corresponds to a transition state with equal contributions from the reactants and products), and ε_W is the water reduction potential. The anodic rate for the hydrogen oxidation (back reaction) can be neglected because the hydrogen concentration is small compared with the proton concentration at the experimental pH level. In contrast to the electron transfer and recombination rates given by equations (2) and (3), k_WR depends exponentially on the size of the metal domain through the dependence of φ(r) on r. We now have a working model to analyze the governing factors in the metal domain size effect on electron transfer and photocatalytic efficiency, and to compare them with the experimental results. First, in Fig. 3c we determine k_ET by fitting the measured electron transfer rates obtained from the transient absorption to equation (2). The overall size dependence of k_ET is in good agreement with the theoretical r³ prediction. The only fitting parameter used is the tunnelling matrix element, t_e = 5.6 × 10⁻⁵ eV. This seemingly small value is consistent with the results of Kamat on semiconductor-metal oxide hybrid nanoparticles 36. The remaining parameters were taken from the literature and are provided in Supplementary Note 4. Next, we calculate the efficiency of hydrogen production and compare the prediction to the experimental values, as shown in Fig. 3d. To capture the essential non-monotonic behaviour observed experimentally, we studied the dependence of the QY on the various parameters, as shown in detail in Supplementary Figs 10-12.
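A minimal sketch of the r³ scaling check described above, using the charge-transfer times quoted earlier for the four tip sizes; the single fitted prefactor simply lumps together the tunnelling matrix element and density-of-states factors and is not meant to reproduce equation (2) itself.

```python
import numpy as np
from scipy.optimize import curve_fit

# Charge-transfer times quoted in the text (tip diameter in nm -> 1/k_ET in ps)
diameters_nm = np.array([1.6, 3.0, 4.8, 6.2])
tau_et_ps    = np.array([770.0, 103.0, 29.0, 16.0])
radii_nm     = diameters_nm / 2.0
k_et         = 1.0 / tau_et_ps                 # ps^-1

def volume_scaling(r, prefactor):
    # k_ET = prefactor * r^3: the r^3 dependence of the metal density of states
    return prefactor * r**3

popt, _ = curve_fit(volume_scaling, radii_nm, k_et)
for r, k in zip(radii_nm, k_et):
    print(f"r = {r:.2f} nm: measured {k:.4f} ps^-1, r^3 model {volume_scaling(r, *popt):.4f} ps^-1")
```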
In particular, the dependence on the hole tunnelling matrix element, t_h, was studied (Supplementary Fig. 10), and a reasonable description of the measured efficiencies is obtained for t_h = 1.1 × 10⁻⁶ eV, which must be roughly 50-fold smaller than t_e to yield meaningful results. This low value corresponds to an electron residence time on the Au tip sufficiently long to allow for effective water reduction. Indeed, the back recombination rate under these parameters for the different metal sizes is in the range of sub-millisecond to several milliseconds for large to small Au domain sizes, respectively, consistent with experimental results for Pt tips on CdS rods 1. Two other parameters are needed for the qualitative description of the experimental results by the model. For the metal surface tension, γ, several values reported for gold nanoparticles were tested (Supplementary Fig. 11) 32,37,38. Reasonable surface tension values (γ) are between 2.5 and 4 J m⁻². Higher surface tension values result in work function values for the metal domain that lie above the semiconductor conduction band minimum, leading to vanishing electron injection rates. Furthermore, for larger tip sizes the efficiency is overestimated owing to the large overpotential relative to the standard water reduction potential at the experimental pH. Finally, for the electron transfer coefficient, α, the fits yield values around α = 0.25 (Supplementary Fig. 12), in the range of values reported in the literature for similar systems 34,39. This implies that the transition state for the water reduction is closer to the reactants. A particular kinetic model solution is presented in Fig. 3d (solid blue curve) and manifests the non-monotonic dependence of the QY on the Au tip size, consistent with the normalized experimental QY (squares connected by a dashed line). The non-monotonic dependence arises from the opposing behaviour of the QY in the limits of r → 0 and r → ∞. For the former, we find that QY ∝ r³ increases with the tip radius, as the rate-determining step is electron injection into the metal tip. For the latter, QY ∝ exp[−(2γev_Mα/zRT)(1/r)] decreases exponentially with the tip radius, as the rate-determining step is the reduction of water, with the back hole transfer competing with the reduction. Hence, an intermediate Au tip size provides an optimal balance between the charge separation rate and efficiency on the one hand, and the back recombination competing with the water reduction on the other. In conclusion, an optimal metal domain size is found for photocatalytic water reduction using hybrid semiconductor-metal NRs. The optimal value is explained in terms of competing processes, where for small tips the hydrogen evolution QY is mainly determined by the rate of electron injection into the metal tip, whereas for large tips it is determined mainly by the water reduction on the metal surface. These two limits show opposite dependence on the metal domain size, leading to an optimal value, as explained by a minimalist and general kinetic model with parameters fitted to reproduce qualitatively the experimental results, in particular after normalizing out the metal domain absorption effects. Thus, the behaviour is general and not limited to the metal type or the reduction reaction, and can be used for rational design of photocatalysts based on hybrid semiconductor-metal nanostructures.
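The competing-rates picture summarized above can be illustrated with a toy steady-state branching model, in which electron injection and back recombination scale with the tip volume while the water-reduction rate carries an illustrative 1/r-dependent overpotential term. This is a sketch for intuition only; the functional forms and all parameters below are assumptions and do not reproduce the paper's fitted master-equation solution.

```python
import numpy as np

def quantum_yield(r_nm, a_et=2.0e-3, a_rec=5.0e-4, k_eh=1 / 3400.0, k_st=1 / 20.0,
                  k_wr0=1.0e-4, c=3.0):
    """Toy H2-generation quantum yield versus metal-tip radius (all rates in ps^-1)."""
    k_et  = a_et * r_nm**3                 # electron transfer to the tip (~ r^3)
    k_rec = a_rec * r_nm**3                # back recombination with the hole (~ r^3)
    k_wr  = k_wr0 * np.exp(c / r_nm)       # water reduction, boosted for small tips
    p_inject = k_et / (k_et + k_eh + k_st)     # electron reaches the metal tip
    p_reduce = k_wr / (k_wr + k_rec)           # electron reduces water before recombining
    return p_inject * p_reduce

radii = np.linspace(0.5, 4.0, 100)
qy = quantum_yield(radii)
print(f"toy-model optimum near r ~ {radii[np.argmax(qy)]:.2f} nm")   # non-monotonic in r
```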
Methods. …and Tecnai F20 G2, respectively. All size statistics were done with the 'Scion Image' program on 200 particles. Absorption was measured with a JASCO V-570 UV-vis-near-IR spectrophotometer. Extinction coefficient values of the nanoparticles were calculated using a previously reported method 40. Inductively coupled plasma mass spectrometry measurements. Following the hydrogen generation kinetic measurements, 100 μl of the HNP solution was etched overnight in 1 ml of 69% nitric acid. Following sonication, 100 μl of the etched HNP solution was mixed with 3.35 ml of triple-distilled water and analysed by ICP-MS (7500cx, Agilent) for Cd. The quantity of Cd in each solution was calculated using external calibration with standard Cd solutions (Supplementary Table 1). Hydrogen evolution rate and efficiency measurements. To determine and measure the hydrogen gas evolved from the photocatalytic reaction using the HNP model systems, the following set-up was used. The photocatalysts were dispersed in triple-distilled water (2 ml; optical density, OD ~1 at 405 nm). The photocatalyst solution was placed in a quartz cuvette, and the hole scavengers Na₂S·9H₂O and Na₂SO₃ (typically 0.05 and 0.07 M, respectively) were added to the water. The solution was purged with argon for 20 min and stirred. The hybrid nanoparticles were then illuminated with a 40 mW, 405 nm laser. Aliquots of the reaction vessel headspace were taken with a gas-tight syringe at different time intervals, and the hydrogen was detected and quantified using a Varian gas chromatograph (model 6820) equipped with a molecular sieve (5 Å) packed column and a thermal conductivity detector. The hydrogen concentration was obtained by comparison of the resulting chromatograms with a calibration curve of known hydrogen amounts (Supplementary Fig. 3 and Note 1). Ultrafast transient absorption measurements. The laser system employed for ultrafast transient absorption was based on a Ti:sapphire chirped-pulse amplified source, with a maximum output energy of ~800 μJ, 1 kHz repetition rate, central wavelength of 800 nm and pulse duration of about 150 fs. Excitation pulses at 400 nm were obtained by doubling the fundamental frequency in a β-barium borate crystal, while pump photons at other wavelengths were generated by non-collinear optical parametric amplification in β-barium borate, with a pulse duration of ~100 fs. Pump pulses were focused to a 175 μm diameter spot. Probing was achieved in the visible range by using white light generated in a thin sapphire plate, and in the UV-visible range by using a thin calcium fluoride plate. Chirp-free transient transmission spectra were collected by using a fast optical multichannel analyser with a dechirping algorithm. The measured quantity is the normalized transmission change, ΔT/T.
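As a rough illustration of how such measurements translate into an apparent hydrogen-evolution quantum yield, the sketch below combines the 405 nm illumination power with a hydrogen production rate of the kind obtained from the GC calibration; the rate value is a placeholder and the estimate ignores the absorption corrections discussed above.

```python
# Back-of-envelope quantum-yield arithmetic (all numerical inputs are placeholders)
H_PLANCK = 6.626e-34      # J s
C_LIGHT  = 2.998e8        # m s^-1
AVOGADRO = 6.022e23       # mol^-1

laser_power_w   = 0.040                                   # 40 mW at 405 nm
photon_energy_j = H_PLANCK * C_LIGHT / 405e-9
photon_flux     = laser_power_w / photon_energy_j         # photons per second

h2_rate_mol_per_h  = 2.0e-7                               # hypothetical rate from the GC calibration
h2_molecules_per_s = h2_rate_mol_per_h * AVOGADRO / 3600.0

# Two transferred electrons (two absorbed photons) are needed per H2 molecule
quantum_yield = 2.0 * h2_molecules_per_s / photon_flux
print(f"apparent quantum yield ~ {quantum_yield:.2%}")
```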
Raw meat based diet influences faecal microbiome and end products of fermentation in healthy dogs Background Dietary intervention studies are required to deeper understand the variability of gut microbial ecosystem in healthy dogs under different feeding conditions and to improve diet formulations. The aim of the study was to investigate in dogs the influence of a raw based diet supplemented with vegetable foods on faecal microbiome in comparison with extruded food. Methods Eight healthy adult Boxer dogs were recruited and randomly divided in two experimental blocks of 4 individuals. Dogs were regularly fed a commercial extruded diet (RD) and starting from the beginning of the trial, one group received the raw based diet (MD) and the other group continued to be fed with the RD diet (CD) for a fortnight. After 14 days, the two groups were inverted, the CD group shifted to the MD and the MD shifted to the CD, for the next 14 days. Faeces were collected at the beginning of the study (T0), after 14 days (T14) before the change of diet and at the end of experimental period (T28) for DNA extraction and analysis of metagenome by sequencing 16SrRNA V3 and V4 regions, short chain fatty acids (SCFA), lactate and faecal score. Results A decreased proportion of Lactobacillus, Paralactobacillus (P < 0.01) and Prevotella (P < 0.05) genera was observed in the MD group while Shannon biodiversity Index significantly increased (3.31 ± 0.15) in comparison to the RD group (2.92 ± 0.31; P < 0.05). The MD diet significantly (P < 0.05) decreased the Faecal Score and increased the lactic acid concentration in the feces in comparison to the RD treatment (P < 0.01). Faecal acetate was negatively correlated with Escherichia/Shigella and Megamonas (P < 0.01), whilst butyrate was positively correlated with Blautia and Peptococcus (P < 0.05). Positive correlations were found between lactate and Megamonas (P < 0.05), Escherichia/Shigella (P < 0.01) and Lactococcus (P < 0.01). Conclusion These results suggest that the diet composition modifies faecal microbial composition and end products of fermentation. The administration of MD diet promoted a more balanced growth of bacterial communities and a positive change in the readouts of healthy gut functions in comparison to RD diet. Electronic supplementary material The online version of this article (doi:10.1186/s12917-017-0981-z) contains supplementary material, which is available to authorized users. Background Faecal microbiome in humans as well in animals is affected by several factors [1][2][3] and, among the others, diet and clinical conditions are likely the most important in dogs [4]. Clinical studies on dogs highlighted that the most recurrent faecal microbiome changes associated to gastro intestinal pathological conditions are typically a drop of biodiversity, an under or overgrowth of some distinct microbial communities and poor faecal quality [5][6][7]. However, an unequivocal identification of bad and good microbes at the different taxonomic level is not reported yet, since clinical-observational studies can intrinsically be biased from the difficulty to control some of the several confounding factors affecting gut microbiome of healthy and unhealthy dogs, as diet compositions, breed, gender, age, environmental and living conditions. Recently, research has been carried out to clarify the role of diet on the modulation of faecal microbiome [8][9][10][11][12][13]. 
The studies have also highlighted the role of the intestinal microbiota in energy harvesting and in the development of obesity in dogs [14,15], as in humans [16]. However, in these studies a large inter-individual variability has been observed, suggesting that several other factors can influence the intestinal microbiome of dogs, which need to be understood and considered in population studies. Dietary intervention studies are thus required to investigate the composition and the fluctuations of the microbial community in healthy animals, to better understand the variability of the gut microbial ecosystem under different feeding conditions, to improve diet design, to identify disease biomarkers and to develop targeted drug therapy [4]. Considering that several factors can affect the gut microbiota, we sought to examine the effect of an abrupt change from an extruded to a raw meat based diet on the fluctuation of the faecal microbial community, end products of fermentation and stool quality in a case control study in adult Boxer dogs. The approach used in the study is aimed at testing whether the change of dietary ingredients can modify the faecal microbiome and whether the return to the initial dietary regime can re-establish the microbial profile. Animals and housing. Eight healthy adult Boxer dogs housed in the same kennel, 5 females and 3 males, aged 4.2 ± 2.8 years, were recruited for the study. There was one pair of half-sib dogs, a male and a female, which were allocated one to each experimental group, whilst the other subjects were unrelated. Dogs were housed in pairs in 6 × 3 m enclosures, where a 2 × 3 m roof covered the paved portion of the pen. The sheltered areas were provided with beds for each dog and were also used for feeding, with water always available. The study was conducted in late autumn in North-East Italy, with an average temperature during the period of 10-15 °C and 60-70% relative humidity. During the day the dogs, in pairs, were allowed to exercise in 10 × 20 m green areas. At the beginning of the study, the average live weight was 30.3 ± 3 kg and all dogs had a Body Condition Score (BCS) of 4/9. Good clinical condition was confirmed by clinical examination and blood biochemical analysis. All protocols, procedures and the care of the animals complied with the Italian legislation on animal care (DL n.116, 27/1/1992), and no ethical approval was required at the time the study was conducted. Diets. Up to the beginning of the study, the dogs had been fed a commercial extruded complete diet, which was used as the Reference diet (RD). The experimental diet (Mixed Diet, MD) was composed of raw human-grade beef meat, representing about 70% of the diet (w/w; for chemical composition see Table 1), supplemented with a complement specifically formulated and manufactured for the study and provided by Nutrigene srl (Udine, Italy). A single batch of raw meat was purchased for the trials, frozen at -20 °C and thawed every day. The complement was produced in one batch and was composed of rice flour, chickpea flour, oat flakes, dry ground carrots, algae-derived omega-3 fatty acids and a mineral-vitamin complex. The chemical composition of the foods is shown in Table 1. The MD was formulated to cover macro- and micronutrient requirements according to NRC recommendations [17]. Daily feed amounts and the macronutrients supplied by the diets are reported in Table 2. Dogs were fed once daily at around 8:00 am.
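As a back-of-envelope illustration of how a daily ration of this kind can be checked against an energy requirement, the sketch below uses modified Atwater factors and an NRC-style maintenance estimate; the coefficients and the example composition are textbook approximations and placeholders, not the values actually used to formulate the MD.

```python
def metabolizable_energy_kcal_per_100g(protein_g, fat_g, nfe_g):
    """Approximate ME of a food per 100 g as fed, using modified Atwater factors (3.5/8.5/3.5)."""
    return 3.5 * protein_g + 8.5 * fat_g + 3.5 * nfe_g

def daily_ration_g(body_weight_kg, me_kcal_per_100g, mer_factor=130.0):
    """Grams of food per day to meet an approximate maintenance energy requirement
    (MER ~ mer_factor * BW^0.75 kcal/day)."""
    mer_kcal = mer_factor * body_weight_kg ** 0.75
    return 100.0 * mer_kcal / me_kcal_per_100g

# Hypothetical wet-diet composition per 100 g (placeholders, not the Table 1 values)
me = metabolizable_energy_kcal_per_100g(protein_g=18.0, fat_g=10.0, nfe_g=5.0)
print(f"ME ~ {me:.0f} kcal/100 g; daily ration for a 30 kg dog ~ {daily_ration_g(30.0, me):.0f} g")
```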
During the trial, the control group received the same amount of the RD, which was also used as the Control Diet (CD), while the experimental diet was prepared by mixing the complement with the meat and adding water to obtain a wet meal (approximately, the ratio between water and complement was 2:1 w/w), which was offered to the dogs immediately. Experimental design. Dogs were randomly split into two groups of 4 individuals and allotted to the experimental blocks. At the beginning of the trial (T0), one group received the MD and the other group continued to be fed the CD for a fortnight (T14). After 14 days, the two groups were inverted: the control group shifted to the MD and the other group shifted to the CD for the following 14 days (T28). No transition period was applied to shift from the reference/control to the mixed diet. Individual live weight was also recorded at T14 and T28. Samples collection. Samples of faeces and blood were collected from each dog before the morning meal at the beginning of the study (T0), after 14 days (T14) before the change of diet, and at the end of the experimental period (T28). On each day of sampling, starting from 6:00 am, the first stool evacuated by each dog was immediately and entirely collected with sterile gloves in a hermetic sterile plastic bag. The plastic bags were immediately and entirely immersed in liquid nitrogen to freeze the stools until they arrived at the lab, and then stored at -80 °C until analysis. For the analysis, frozen stools were carefully cleaned of external contaminants with a sterile blade, then ground in a sterilized mortar under liquid nitrogen to avoid thawing, and mixed. Two aliquots were obtained, placed in sterile plastic tubes and stored at -80 °C for fatty acid and lactate or DNA analysis. From the cephalic vein, about 4 ml of blood was collected at each sampling time, immediately divided into two aliquots, one with K3-EDTA and one without anticoagulant, and stored at 8 °C until arrival at the lab. Plasma and serum were separated by centrifugation for 25 min at 3250 rpm and then stored in 2.5 ml tubes at -20 °C until biochemical analysis. Blood analysis. Plasma and serum were sent on dry ice at the end of the trial to the certified laboratory of the Istituto Zooprofilattico delle Venezie (Legnaro, Padova, Italy) for biochemical analysis. Faecal DNA extraction, sequencing and taxonomic annotation. Prior to DNA extraction, faecal samples (150 mg) were washed following a 3-step washing procedure as described by Fortin et al. [18]. Microbial DNA was extracted from 150 mg of faeces using a Faecal DNA MiniPrep kit (Zymo Research; Irvine, CA, USA) following the manufacturer's instructions, including a bead-beating step. The pre-amplification concentration of DNA in the samples was measured with a Nanodrop 3300 Spectrophotometer (Thermo Scientific; Waltham, MA, USA) and confirmed with a Qubit™ 3 Fluorometer (Thermo Scientific; Waltham, MA, USA), resulting in satisfactory quality and quantity (219 ± 63 ng/μl; average 260/280 and 260/230 ratios of 1.8 and 1.7, respectively). DNA was fragmented, and the 16S rRNA V3 and V4 regions were amplified for library preparation, also adding the indexes for sequencing, using a Nextera DNA Library Prep kit (Illumina; San Diego, CA, USA) following the manufacturer's instructions. The 16S amplicon PCR forward primer (5' TCGTCGGCAG CGTCAGATGT GTATAAGAGA CAG CCTACGG GNGGCWGCAG) and reverse primer (5' GTCTCGTGGG CTCGGAGATG TGTATAAGAG ACAGGACTAC HVGGGTATCT AAT CC) were used [19].
Around 460 bp amplicons were then sequenced with a MiSeq (Illumina; San Diego, CA, USA) in 2 × 300 bp paired-end mode following the standard procedures. Sequenced reads that passed the quality check (Phred score ≥30) were then annotated for 16S rRNA taxonomic classification using the Ribosomal Database Project (RDP) Classifier, a Bayesian classifier developed to provide rapid taxonomic positioning based on rRNA sequence data [20]. The algorithm is a high-performance implementation of the RDP classifier described in Cole et al. [21]. Data were finally parsed and collected using an in-house Perl script (Additional file 1: Table S1). Faecal score, pH, lactate and fatty acids analysis. Right after evacuation, the stools were assigned a faecal quality score using a 5-point visual scale with 0.5-point intervals, ranging from 1 (hard and dry faeces) to 5 (liquid diarrhoea) [22]. Scores of 2-3 were considered the optimum, consisting of firm but not dry stool, with moderate segmentation visible, holding its form when picked up and leaving no or minimal residue on the ground. Statistical analysis. At each taxonomic level, sequences for each sample were normalized to ‰ abundance profiles. Taxa with an abundance lower than 10‰ [23] in more than 16 samples out of 24 were excluded from the statistical analysis. The Shannon α-biodiversity index (H') was also calculated at the genus level, including all taxa, according to the equation H' = −Σ(Pᵢ × ln Pᵢ), where Pᵢ is the frequency of each genus within the sample. The evenness index (J) was calculated as J = H'/ln(S), where S is the total number of genera within each sample. The blood and faecal variables and the metagenomic abundances were analyzed by applying a linear mixed model. The model included the fixed effects of time of sampling (3 levels: T0, T14 and T28), treatment (3 levels: RD, MD, CD) and the time of sampling × treatment interaction, and the dog as a random factor repeated over the time of sampling. Orthogonal contrasts of T14 Vs T0 and T28 Vs T0 were calculated, and Least Significant Difference statistics with Bonferroni multiple-testing correction on estimated marginal means were used as the significance test. Pearson correlations between the relative abundance of microbial families or genera and the proportions of SCFAs and lactate were calculated. All statistical analyses were performed with SPSS Statistics [24]. BCS and blood biochemistry. Dietary treatment did not significantly affect body weight, which was equal to 30.1 ± 2.7 kg with the CD and 29.9 ± 2.8 kg with the MD, nor the BCS. For blood biochemistry (Additional file 2: Table S2), only plasma glucose was affected by the MD (P < 0.05) and by time of sampling (P < 0.05). The other parameters did not change significantly between groups. Metagenome sequencing and taxonomic annotation. An average of 337,224 ± 177,407 raw sequences was obtained for the samples. After the quality check, a mean of 362,292 ± 247,167, 297,745 ± 89,305 and 241,920 ± 50,365 sequences was available for taxonomic annotation for the RD, MD and CD groups, respectively. The bacterial annotations, the relative abundances across the dietary treatments and the results of the statistical analysis are reported at the taxonomic levels of Phylum, Family and Genus. Dietary treatment had a significant effect on the phylum Proteobacteria (P < 0.05), which was higher in the MD compared to the RD (Table 3). An increased abundance was measured in the MD Vs RD also for the phyla Actinobacteria and Fusobacteria (P < 0.05). No differences were observed between the CD and the RD.
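A minimal sketch of the abundance filtering, diversity indices and correlation step described above, applied to a made-up genus-level count table; the data, genus names and the lactate vector are placeholders, while the thresholds follow the description in the text.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
# Hypothetical count table: 24 samples (8 dogs x 3 time points) x 30 genera
counts = pd.DataFrame(rng.integers(0, 5000, size=(24, 30)),
                      columns=[f"genus_{i}" for i in range(30)])

# Normalise each sample to per-mille abundances
permille = counts.div(counts.sum(axis=1), axis=0) * 1000

# Exclude taxa below 10 permille in more than 16 of 24 samples
# (i.e. keep taxa reaching 10 permille in at least 8 samples)
keep = (permille >= 10).sum(axis=0) >= 8
filtered = permille.loc[:, keep]

def shannon(row):
    """Shannon index H' = -sum(p_i * ln p_i) over the non-zero genus frequencies."""
    p = row / row.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

h_prime  = permille.apply(shannon, axis=1)                     # H' per sample, all taxa
evenness = h_prime / np.log((permille > 0).sum(axis=1))        # J = H' / ln(S)

# Pearson correlation between one genus and a fermentation end product (e.g. lactate)
lactate = pd.Series(rng.random(24))                            # placeholder proportions
r, p_value = pearsonr(filtered.iloc[:, 0], lactate)
print(f"mean H' = {h_prime.mean():.2f}, mean J = {evenness.mean():.2f}, r = {r:.2f} (P = {p_value:.3f})")
```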
At the family taxonomic level (Table 4), several bacterial families were significantly increased in the MD group. The effects of treatment and of the contrast MD Vs RD were significant for Streptococcaceae, Clostridiaceae 1 and Enterobacteriaceae. For the Bacteroidaceae, Veillonellaceae and Coriobacteriaceae, significant effects were observed only for the MD Vs RD contrasts. A marked decrease (P < 0.01) of the Lactobacillaceae was observed as a consequence of treatment and for the MD Vs RD contrast. The Prevotellaceae also changed significantly across the diets (P < 0.05), being lower in the MD and higher in the CD, compared with the RD. The abundance of the genera Clostridium XI, Bacteroides (P < 0.05), Fusobacterium, Clostridium XIX, Cetobacterium, Escherichia/Shigella and Lactococcus was significantly (P < 0.01) higher in the MD diet compared to the RD (Fig. 1; Additional file 3: Table S3). In the MD group, a marked decrease of the genera Lactobacillus and Paralactobacillus (P < 0.01) was observed. For the genus Prevotella, a significant effect of the treatment was shown (P < 0.05), with a lower abundance in the MD group. The effects of time and time × treatment were not significant at the Phylum (Table 3) or at the Family level (Table 4). At the Genus level, the relative abundance of Clostridium XI (P < 0.05) and Turicibacter (P < 0.01) changed significantly with time, and for Sutterella a significant effect was also observed for treatment (P < 0.01) and for the time × treatment interaction (P < 0.05) (Additional file 3: Table S3). The Shannon biodiversity index (H') at the genus level (Fig. 2a) showed a significant increase in the MD group (3.31 ± 0.15) in comparison to the RD group (2.92 ± 0.31; P < 0.05). It returned to a value close to the RD in the CD treatment (3.15 ± 0.09). The same differences were also observed for the evenness index (J; Fig. 2b). In particular, the J value of the RD group was significantly lower than that of the MD and CD groups (P < 0.05). Faecal score and end products of fermentation. The MD treatment significantly (P < 0.05) lowered the faecal score and increased the lactic acid concentration in the faeces in comparison to the RD treatment (P < 0.01) (Fig. 3a and b; Additional file 4: Table S4). A numerical increase, although not significant (P = 0.081), was also observed for the proportion of butyrate in the MD treatment. In comparison with the RD treatment, acetic acid was lower (P < 0.05) for the MD and CD treatments, although for the CD the concentration was closer to the RD. No significant variations in the molar content and proportions of the other SCFAs were observed. Correlations between metagenome, lactate and SCFA proportions. Correlation analysis showed several significant associations between the microbiome and SCFAs or lactate (Table 5). Acetate was negatively correlated with the genus Escherichia/Shigella (P < 0.01), belonging to the phylum Proteobacteria, and with the family Lachnospiraceae (P < 0.05) and the genus Megamonas (P < 0.01), belonging to the phylum Firmicutes. Positive correlations with butyrate production (P < 0.05) were calculated for the Lachnospiraceae and its genus Blautia, for the genus Peptococcus (phylum Firmicutes) and for the family Coriobacteriaceae (phylum Actinobacteria). Positive correlations with lactate production were observed for the genera Megamonas (P < 0.05) and Escherichia/Shigella (P < 0.01), for the family Enterococcaceae (P < 0.05) and the genus Lactococcus (P < 0.01) (phylum Firmicutes), and for the genus Clostridium XIX (P < 0.05) (phylum Fusobacteria).
The genera Lactobacillus and Paralactobacillus in this study were negatively correlated with lactate (P < 0.01). For the SCFA isoforms, positive correlations were calculated for isovalerate with the genus Turicibacter (P < 0.01) and for isobutyrate with the genera Blautia and Sutterella (P < 0.05). Discussion. The influence of diet composition on the modification of the gut microbiome in dogs has recently been reviewed by Deng and Swanson [4]. Many of the reported studies concern changes in nutrient content, such as proteins or fibers, in dry extruded formulations, but only one study [25] investigated the composition of the faecal microbiome with diets containing raw beef or chicken meat; however, in that study a comparison with extruded kibbles was also not carried out. Interest in raw meat-based diets has been increasing in recent years [26], since the nutritional properties of raw meat are thought to be higher than after extrusion [27]. According to Schlesinger and Joffe [28], the risks associated with feeding raw meat are controversial and have been reported only in testimonials, case series or limited cohort and case-controlled studies. Our study is the first attempt to compare, in healthy dogs, a complete diet (MD), consisting of vegetable sources supplemented with vitamins and minerals and raw beef meat, with a commercial extruded diet (RD and CD). In our study, the diets were compared in terms of blood biochemistry, faecal quality, end products of fermentation and microbiome. To limit the variability of the meat source, in this study all dogs were offered only high-grade skeletal muscle meat originating from a single batch. The chemical composition reported in Table 1 is the average of 4 analyses. Published studies report adaptation periods varying from 10 days [11] and 2 weeks [10, 25] to 4 weeks [9]. According to the results of these studies, and to avoid modifications due to unexpected environmental changes, we applied a 14-day interval between sample collections. The main phyla detected in the three diets (Table 3) corresponded to those reported for healthy dogs using other sequencing techniques [5,6,12,29], but in our study a higher abundance of Firmicutes and a lower abundance of Bacteroidetes were observed. Other studies report a large variability in the prevalence of these phyla, often with a smaller abundance of Firmicutes and a greater prevalence of Bacteroidetes and Fusobacteria [14,30]. Hence, a direct comparison of microbiome compositions with these and other published results appears difficult. In the present study, the MD diet significantly changed the abundance of the phyla Actinobacteria, Fusobacteria and Proteobacteria. However, at the phylum taxonomic level it is difficult to understand the relationship between microbial communities, fermentation products and dietary regimes. The effect of dietary shifts on the composition of microbial communities was more evident at the family taxonomic level. The inclusion of raw meat in the diet, together with the variation of composition and the physical form of the MD, dramatically modified the abundance of the families Lactobacillaceae, Fusobacteriaceae, Coriobacteriaceae, Clostridiaceae 1, Enterobacteriaceae, Streptococcaceae and Enterococcaceae (Table 4). Moderate variations of the diet do not seem to influence intestinal microbial communities. The inclusion of navy beans in a control diet of healthy dogs did not cause a shift in the faecal microbiome after a 4-week dietary intervention study [9]. Also Panasevich et al.
[12] found limited variations in the composition of the faecal microbiome when increasing the potato fiber in the diet from 0 to 6%. A decreased proportion of the family Coriobacteriaceae was observed by Suchodolski et al. [5] in dogs with inflammatory bowel disease (IBD) and other faecal dysbioses in comparison to healthy subjects, and Xenoulis et al. [31] observed a significant increase of Enterobacteriaceae, mainly due to E. coli sequences, in IBD-affected dogs. However, these authors did not find changes in the families Streptococcaceae, Enterococcaceae and Fusobacteriaceae. The comparison of the present results with previously published data suggests that a relevant shift of the faecal microbiota in healthy dogs can be observed only as a consequence of profound dietary variations. The effect of the diets on the microbial profile was more evident at the genus taxonomic level (Additional file 3: Table S3 and Fig. 1), and other significant variations were found for genera not included in the significantly affected families (Table 4). Other than Lactobacillus and Paralactobacillus (family Lactobacillaceae), Fusobacterium, Clostridium XIX and Cetobacterium (family Fusobacteriaceae), Escherichia/Shigella (family Enterobacteriaceae) and Lactococcus (family Streptococcaceae), diet significantly influenced the genera Clostridium XI, Bacteroides and Megamonas, but not their respective families. Of note, the relative abundance of these families and genera in the CD diet returned quite close to that of the RD diet, further suggesting a dietary signature for the microbiome, as also indicated by Beloshapka et al. [25] and Hang et al. [32]. Whether or not the variations of the microbiome observed in this study were associated with better gut health is not easy to assess, but the increase of H' in the MD diet, due to a better distribution of evenness (J; Fig. 2a and b), would indicate an enhancement of gut health. Lower H' and J in IBD-affected dogs are reported by Suchodolski et al. [5,33]. According to Alcock et al. [34], lower biodiversity of the intestinal microbiome is associated with a higher microbial fitness, which is detrimental to host fitness, leading in mice and humans to unhealthy eating behavior and obesity. The relationship between biodiversity and obesity was also observed in Beagle dogs by Park et al. [15]. Also in favour of better gut health with the raw meat-based diet (MD) was the improvement of the faecal score (Fig. 3a), which further indicated better colonic health, as suggested by Gagnè et al. [35]. Moreover, from the visual appraisal of the faecal output, which was observed to be reduced with the MD diet, a better apparent digestibility of the diet can be supposed, as also suggested by Beloshapka et al. [27] for dogs fed raw meat. As a further evaluation of the microbial community in the gut, we measured faecal SCFAs and lactate, since their concentrations depend upon the colonic fermentation of nutrients by microorganisms [36,37]. Dogs can digest starch in the small intestine [38], and bacteria can ferment undigested starch and other complex carbohydrates in the large intestine, producing SCFAs. Even though the contribution of these end products of fermentation to the energy balance of the host is considered marginal in dogs [37], SCFAs are important growth factors for intestinal cells and for gut health [39], also having immunoregulatory activity on T cells [40]. The average content of faecal SCFAs ranged from 195.7 to 216.9 μmol/g, a level generally found in animals fed low-fiber diets [27,41].
The amount, type and physical form of the fiber substrates affect the extent and the end products of the fermentation [12]. However, in our trial total SCFAs were not affected by diet (Additional file 4: Table S4), even though the amount of crude fiber supplied by the RD and CD diets was higher than that provided by the MD diet (Table 2). This may be the combined result of a reduced fermentation of the fiber after extrusion, together with an increase of the intestinal transit time of the RD and CD diets due to their higher crude fiber content. Overall, the SCFA profile measured in the present research was similar to that reported for healthy dogs in a previous study [41]. Correlation analysis between the abundance of specific families and genera and the SCFA and lactate proportions in the faeces (Table 5) confirmed a statistical, although not biochemically proven, association of some microbial taxa with the end products of fermentation. However, caution must be taken before assessing a direct link between one microbial taxon and the end products of fermentation. The gut microbial ecosystem is complex, presenting a mixture of common and divergent interests, with competition or mutual benefits, in such a way that a product of fermentation from one microbial strain can be the substrate for another strain, sometimes occupying the same ecological niche [34]. There was a positive correlation of members of the family Coriobacteriaceae and of the family Lachnospiraceae (notably the genera Blautia and Peptococcus) with butyrate, supporting a positive role of these microbes in gut health. Butyrate is an essential substrate for the cells of the intestinal mucosa [37,42], and the increase of its content in the gut can influence other physiological effects at the whole-organism level [42,43]. Another very interesting correlation was calculated for the genus Megamonas, since, other than being associated with increased faecal butyrate, it was also related to a shift between acetate and lactate, with a positive correlation with the latter acid. Megamonas, a predominant genus of the family Veillonellaceae, is reported to increase in the faeces of dogs fed diets supplemented with inulin [25] or fructooligosaccharides [44], suggesting a potential impact of these bacteria on gastrointestinal health. The specific role of acetate remains poorly known and is still under investigation in mammals. Acetate in dogs is produced by the fermentation of fiber [11] or from undigested protein in the colon [45]. In humans and in mice, an increase of acetate produced by Bifidobacterium has been reported to protect the host from enteropathogenic infection via carbohydrate transporters [46]. In the present study we did not observe a significant variation of the acetate concentration between the CD and the MD, nor a changed abundance of Bifidobacteria consequent to the experimental diet. Acetate has also been reported to stimulate insulin secretion and related changes associated with obesity and metabolic syndrome [47]. In mice, Frost et al. [48] observed a reduction of appetite through interaction with the central nervous system after peripheral administration of acetate, without differences in plasma glucose, peptide YY (the anorexigenic gut hormone PYY) or GLP-1 (glucagon-like peptide-1). In dogs, Bosch et al. [49] reported a reduction of voluntary intake associated with higher acetate in faeces, but they did not observe any effect on the postprandial plasma glucose, PYY, GLP-1 and ghrelin responses.
This conflicting evidence deserves further studies to clarify the physiological role of acetate, especially in dogs. The importance of considering the microbial community as a whole is evident from the concurrent effect on the lactate proportion of Escherichia/Shigella (P < 0.01), Enterococcaceae (P < 0.05), Clostridium XIX (P < 0.05) and, especially, of the homolactic bacteria Paralactobacillus, Lactobacillus and Lactococcus. Microbes of the family Lactobacillaceae are generally associated with higher lactate, but in our dietary intervention study Lactobacillus almost disappeared with the raw meat diet (MD). Instead, Lactococcus, another lactic acid-producing genus rarely observed in other studies [10,12,29], strongly increased in the MD diet, probably occupying the ecological niche that, in the extruded foods (RD and CD diets), usually provides a more suitable environment for Lactobacillaceae. The MD diet supplied less, but more digestible, starch compared with the RD diet (Table 2, carbohydrates by difference), and in the complement the starch from rice and chickpeas was thermally treated and highly gelatinized, probably making it more accessible for fermentation. Since Bazolli et al. [36] reported that an increase of lactate in faeces can be related to carbohydrates escaping duodenal digestion, the observed increase of lactate with the MD diet was probably the result of the variation of the microbial community. It has been shown that an excessive concentration of lactate leads to a higher osmotic pressure in the intestinal lumen, with a consequent increase of faecal volume and moisture content and subsequent poor faecal quality [50,51]. In our study, only the molar proportion of lactate changed (Fig. 3), without a significant difference in the total amount of SCFAs or in faecal pH. The concomitant reduction of the faecal score would indicate that the increase of lactate was related to better gut health, as reported by Swanson et al. [37]. Furthermore, Felix et al. [52] observed that faecal lactate is related to lactic acid-producing microorganisms, which can inhibit the development of proteolytic bacteria in the gut of dogs. Conclusions. Studies on the composition and variation of the faecal microbiome in healthy dogs offer a promising opportunity to better understand the factors affecting the microbial communities and the end products of fermentation, but further efforts from the scientific community are required to clarify whether a reference composition for healthy dogs can be established. From our results and from the comparison with the existing scientific evidence, it appears that a modification of the microbiome can be attained when a considerable variation of the dietary regime is applied. Specifically, the administration of highly digestible feed, combining fresh meat with readily fermentable substrates, promoted a more balanced growth of bacterial communities and a positive change in some of the readouts of healthy gut functions. Additional files. Additional file 1: Table S1. Script used for parsing and collecting metagenomic data. (XLS 28 kb) Additional file 2: Table S2. Blood biochemistry of dogs fed the Reference diet (RD), Mixed diet (MD) or Control diet (CD). Means, standard deviations and statistical effects are reported for the three diets. (XLSX 12 kb)
Identifying the mechanism underlying treatment failure for Salmonella Paratyphi A infection using next-generation sequencing – a case report Background Salmonella is a notorious pathogen that causes gastroenteritis in humans, and the emergence of resistance to third-generation cephalosporins and azithromycin has raised concern. Cases of Salmonella Paratyphi A infection accompanied by spondylitis are rare. Here, we report a case of initial antibiotic treatment failure in a Korean man with Salmonella Paratyphi A infection, in which next-generation sequencing (NGS) was conducted to determine the cause of the failure of the initial treatment. Case presentation A 70-year-old man was admitted to Chosun University Hospital with persistent low back pain and a history of 5 days of chills and fever treated at another hospital a month earlier. He had been administered ceftriaxone (2 g daily) for 18 days, including the initial treatment, to cover Salmonella enterica. An antimicrobial susceptibility test using an MIC plate found that the identified organism was resistant to ciprofloxacin and nalidixic acid. Moreover, the Salmonella Paratyphi A isolate was found to have an MIC > 16 mg/L for azithromycin; as the isolate was resistant to both azithromycin and nalidixic acid, the treatment was switched to a combination of ciprofloxacin and cefotaxime. We carried out NGS to determine the cause of the failure of the initial treatment for Salmonella Paratyphi A infection. NGS showed that the amino acid substitution GyrA S83F and the presence of multiple resistance-nodulation-cell division (RND)-family efflux pumps led to high-level resistance to quinolones. No genes related to ceftriaxone resistance, such as CTX-M, CMY-2, or other extended-spectrum beta-lactamases, were identified in Salmonella enterica Paratyphi A using NGS. The GyrA S83F mutation and the expression of multiple RND-family efflux pumps may have contributed to the failure of ceftriaxone treatment, even though the MIC of the isolate for ceftriaxone was less than 1 mg/L. Conclusion This case involved a Salmonella Paratyphi A infection accompanied by spondylitis. To our knowledge, this is the first report to elucidate the mechanism underlying antimicrobial resistance in Salmonella Paratyphi A using NGS. Background Salmonella is a notorious pathogen that causes gastroenteritis in humans, and 94 million cases of salmonellosis are reported globally every year [1]. Infections are systemic, characterized by fever and gastrointestinal symptoms, and are associated with significant morbidity [2]. Death can occur, especially if appropriate antimicrobial therapy is delayed [3]. Fluoroquinolones became the first-line antimicrobial therapy following their introduction in the 1980s, and they were initially associated with rapid fever clearance and low rates of both relapse and chronic faecal carriage [4]. However, reduced ciprofloxacin susceptibility (MIC 0.06-0.25 mg/L) has become increasingly prevalent in Salmonella enterica Typhi and Salmonella enterica Paratyphi A, and has been associated with clinical failure [5,6]. Fluoroquinolone-resistant strains of Salmonella Typhi and Salmonella Paratyphi A have recently emerged in tropical and subtropical regions of the world, such as Southeast Asia and Africa [7]. In the UK, > 90% of Salmonella Typhi and Salmonella Paratyphi A isolates acquired from India between 2006 and 2007 were found to be nalidixic acid-resistant [8].
Alternative antimicrobials, including third-generation cephalosporins and azithromycin, are now being increasingly used as first-line therapies [9]. Reports of emergence of resistance to third-generation cephalosporins and azithromycin have raised concern amongst clinicians [10,11]. A case of clinical failure under azithromycin treatment in a case of bacteremia due to Salmonella enterica Paratyphi A was reported in 2014 [7]. Here, we report a case of initial antibiotic treatment failure in a Korean man with Salmonella Paratyphi A infection and conducted next-generation sequencing (NGS) to determine the cause of failure of initial treatment for Salmonella Paratyphi A infection. Case presentation In January 2018, a 70-year-old man residing in South Korea was admitted to Chosun University Hospital with reported consistent low back pain. At first, he had been admitted to a local hospital on 24 November 2017, a month before visiting Chosun University Hospital with a history of 5 days of chills and fever. In the local hospital, in view of the possibility of acute pyelonephritis, he was first treated with intravenous ceftriaxone at a dosage of 2 g daily. Two days after admission, back pain started. During antibiotic treatment, blood cultures taken on admission yielded Salmonella enterica. He remained on ceftriaxone (2 g daily) for 18 days including initial treatment to cover S. enterica. Upon follow-up blood culture, no bacteria were detected on the 8th and 25th days after starting treatment, and the patient no longer had fever; he was subsequently discharged from the local hospital on 19 December 2017. However, he consistently suffered from lower back pain, nausea, and vomiting; he was re-admitted to the same local hospital 9 days after his discharge. When he was re-admitted to the local hospital again on 30 December 2017, magnetic resonance imaging (MRI) was performed and L1 spondylitis was demonstrated. MRI revealed whole bone marrow oedema with endplate lytic changes in the L1 body and focal marrow oedema in the upper endplate of L2 bodies. Additionally, mild destruction of intervertebral disc at L1-2 was shown. These findings were considered to be indicative of pyogenic spondylitis ( Fig. 1a, b, c). He was empirically treated with cefazolin (1 g, 3 times a day) for 10 days to cover the possibility of Staphylococcus aureus infection, which is a common cause of pyogenic spondylitis. Then, blood cultures were tested and yielded S. enterica again. Finally, he was transferred to Chosun University Hospital, and bone biopsy of L spine was performed on 3 January 2018. He had no fever, and the initial blood test was generally unremarkable except that his erythrocyte sedimentation rate (ESR) level was 108 mm/h. After 7 days, the biopsy results of bone and blood cultures were positive for Salmonella enterica. After the bacteria were identified, with suspicions of metastatic spondylitis, he was treated with ciprofloxacin for 13 days. Computed tomography (CT) was performed for further evaluation. CT scans revealed L1-2 spondylitis spreading to the left psoas muscle. Fluorodeoxyglucose positron emission tomography (FDG PET) was performed to detect clinically undetected diseases in different sites, and it showed hypermetabolism and bone destruction in L1 and L2 vertebral bodies, as well as mild hypermetabolism in the right facet joint between L4 and L5 vertebral bodies. 
The antimicrobial susceptibility test using an MIC plate, performed by Chosun University Hospital and the Health and Environment Research Institute of Gwangju City, found that the identified organism was resistant to ciprofloxacin and nalidixic acid (Table 1). Since susceptibility tests for macrolides could not be carried out due to unavailability of the disc, the specimen was sent for testing to the Korea Centers for Disease Control and Prevention. While waiting for the results, the antibiotic used in the treatment of the patient was changed from ciprofloxacin to azithromycin due to the identified resistance to ciprofloxacin and nalidixic acid. While he was treated with azithromycin for 15 days, C-reactive protein (CRP) gradually decreased to the normal value of 0.09 mg/dL; the ESR decreased as well, but not to normal levels (CRP: 0.09 mg/dL, ESR: 44 mm/h on the 20th day of admission). He was discharged and followed up through an outpatient clinic. Seven days later, the Salmonella Paratyphi A isolate was found to have an MIC > 16 mg/L for azithromycin. As the isolate was resistant to both azithromycin and nalidixic acid, the treatment was switched to a combination of ciprofloxacin and cefotaxime. During 2 months of combination therapy, his clinical symptoms such as back pain improved, and the ESR gradually decreased to normal (ESR: 20 mm/h on the 64th day of combination therapy). We carried out NGS to determine the cause of failure of the initial treatment for Salmonella Paratyphi A infection and to elucidate the mechanism underlying antimicrobial resistance in Salmonella Paratyphi A. Purified genomic DNA was randomly sheared to yield DNA fragments approximately 350 bp in size using a Covaris S2 Ultrasonicator. Library preparation was performed using the Illumina TruSeq DNA PCR-free Preparation Kit following the manufacturer's instructions. Adaptor enrichment was performed using PCR according to the manufacturer's instructions. The final library size and quality were evaluated electrophoretically with an Agilent High Sensitivity DNA Kit. The 100-bp paired-end reads were sequenced on the Illumina HiSeq 2500 platform. Further image analysis and base calling were performed with RTA 2.7.3 (Real Time Analysis) and bcl2fastq v2.17.1.14. Resistance genes were identified using the Resistance Gene Identifier (RGI) in the Comprehensive Antibiotic Resistance Database (CARD, http://arpcard.mcmaster.ca) using predicted peptide sequences. All sequences were run through all databases in CARD with a selected threshold of ID = 98.00%. The clinical isolates were identified with a VITEK II automated system (bioMérieux, Marcy-l'Etoile, France). Tests for antimicrobial susceptibility, including MIC determination, were performed with the VITEK II system. At Chosun University Hospital, we cultured the blood collected on 30 December 2017 at the local hospital and conducted antimicrobial susceptibility tests using the Clinical and Laboratory Standards Institute (CLSI) guidelines (Table 1). The antimicrobial susceptibility test performed by the Health and Environment Research Institute of Gwangju City showed that the organism identified from the closed pus taken at bone biopsy, when the patient was admitted to Chosun University Hospital on 3 January 2018, exhibited resistance to ciprofloxacin (MIC = 1 mg/L) and nalidixic acid (MIC > 128 mg/L) (Table 1). The antibiotic resistance test results for the blood samples obtained on December 30, 2017 and the biopsy samples obtained on January 3, 2018 were the same.
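A minimal sketch of the post-processing implied above, filtering a tab-delimited RGI (CARD) report at the 98% identity threshold; the file name and the "Best_Identities", "AMR Gene Family" and "Drug Class" column names are assumptions about the report layout and may need adjusting to the actual RGI output.

```python
import pandas as pd

# Placeholder path to the tab-delimited RGI report produced from the assembled genome
rgi = pd.read_csv("rgi_output.txt", sep="\t")

# Keep hits at or above the identity threshold used in the text (ID = 98.00%)
hits = rgi[rgi["Best_Identities"] >= 98.0]          # column name assumed, see lead-in

summary = (hits.groupby(["AMR Gene Family", "Drug Class"])   # column names assumed
               .size()
               .sort_values(ascending=False))
print(summary)
```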
The isolate also presented resistance to macrolides (MIC > 16 mg/L) in a test performed by the Korea Centers for Disease Control and Prevention. The isolate used for NGS was obtained at the local hospital when the patient was readmitted on 30 December 2017. The NGS results for Salmonella Paratyphi A, with a selected threshold of ID = 98.00%, showed that several antimicrobial resistance gene families were present in the isolate, including resistance-nodulation-cell division (RND) antibiotic efflux pumps such as mdsC, CRP and sdiA; gyrA associated with fluoroquinolone resistance (Salmonella enterica gyrA); and MATE transporters such as MdtK, as well as AAC(6′)-Iy (Table 2). Based on these results, the most likely cause of treatment failure was an RND antibiotic efflux pump. Meanwhile, azithromycin resistance genes such as mph and mef were not identified (Table 2). There were also no resistance genes related to ceftriaxone, such as CTX-M, CMY-2, or other extended-spectrum beta-lactamases. Discussion and conclusion In 2008, a case of L3/4 vertebral osteomyelitis due to Salmonella Paratyphi A was first reported with bacteriological confirmation in Dubai, United Arab Emirates [12]; however, it was not described whether the isolate exhibited resistance to nalidixic acid and macrolides. In this case, we carried out NGS to elucidate the mechanism underlying antibiotic resistance in Salmonella Paratyphi A because of the development of osteomyelitis during intravenous ceftriaxone treatment. To date, there have been no studies involving the use of NGS in identifying the mechanism underlying treatment failure for Salmonella Paratyphi A. A recent study reported that 40% of Salmonella isolates in Chennai, India have an MIC of > 0.5 μg/mL against ceftriaxone [13]. There is a wide variety of serotypes and susceptibility results among Salmonella spp. isolated from clinical specimens in Korea [14]. The three most common Salmonella serotypes are Enteritidis, Typhimurium, and Infantis. These Salmonella strains had resistance rates of 38.7% to ampicillin, 23.0% to chloramphenicol, 8.2% to cefotaxime, 8.6% to ceftriaxone, and 6.3% to trimethoprim-sulfamethoxazole [14]. Another study showed a similar result: among Salmonella spp. isolated in Jeollanam-do, Korea, a total of 22 different serotypes were identified, and the major serotypes were Salmonella Enteritidis (116 strains, 42.0%) and Salmonella Typhimurium (60 strains, 21.7%). The highest resistance rate was observed for nalidixic acid (43.4%), followed by ampicillin (40.5%) and tetracycline (31.6%) [15]. Resistance to nalidixic acid was detected in 81.0% of Salmonella Enteritidis isolates. Multidrug resistance was detected in 43.3% of Salmonella spp. Salmonella Enteritidis and Salmonella Typhimurium presented the highest resistance (98.3%) and multidrug resistance (73.3%) rates, respectively [15]. A recent report similar to our case showed that a patient infected with Salmonella Paratyphi A was treated with ceftriaxone, but his symptoms remained; therefore, his treatment was changed to azithromycin [7].
Based on our NGS results, azithromycin resistance genes such as mph and mef were not found; however, it was suggested that the mechanism of resistance to azithromycin was an RND antibiotic efflux pump. In our previous study, a combination of ciprofloxacin and cefotaxime showed synergistic effects against nalidixic acid-resistant Salmonella Paratyphi A and B. This combination appears to be more effective than monotherapy and may help reduce the chances that fluoroquinolone-resistant mutants will emerge in patients with severe typhoid fever [16,17]. In this case, we treated the patient with ciprofloxacin and cefotaxime, and his clinical features improved: the back pain lessened and the ESR level decreased. Olaquindox-resistant isolates were found to contain the gene combination oqxAB, which encodes an RND family efflux pump, confers resistance to olaquindox, quinolones, and chloramphenicol, and reduces susceptibility to other antibiotics [18]. OqxAB, a plasmid-mediated RND efflux pump conferring resistance to multiple antibiotics, was found in Salmonella isolates recovered from food samples. The overall OqxAB-positive rate of Salmonella Typhimurium strains was 29% (159 out of 546 isolates), and the yearly rates were 0, 13, 26, 32, 36, 39, and 42% during the years 2005 to 2011, respectively. OqxAB was also found to be associated with multidrug resistance in S. Typhimurium isolates from Hong Kong and from the Infectious Disease Prevention and Control, National Institute for Communicable Disease Control and Prevention (ICDC), Chinese Center for Disease Control and Prevention, Beijing, China. Among the S. Typhimurium isolates of the OqxAB-positive group, 94% (Hong Kong) and 98% (ICDC) were resistant to ciprofloxacin (MIC = 2 mg/L); the corresponding resistance rate in the OqxAB-negative S. Typhimurium isolates from Hong Kong and ICDC was only 11% [19]. Our NGS study showed that the expression of multiple RND family efflux pumps such as MdsC, CRP, and SdiA, which may be related to quinolone resistance, may have been responsible for the failure of ceftriaxone treatment (Table 2). The upregulation of endogenous SdeXY-HasF-mediated efflux has been reported to be associated with tigecycline resistance in Serratia marcescens, along with increases in the MICs for tetracycline, ciprofloxacin, and cefpirome [20]. The overexpression of the BmeB efflux pump has also been reported to cause low-to-intermediate-level clinically relevant fluoroquinolone resistance, can be coupled with GyrA substitutions to cause high-level fluoroquinolone resistance, and also contributes to high-level clinically relevant resistance to beta-lactams [21]. Our case likewise showed that both the amino acid substitution GyrA S83F and the expression of multiple RND family efflux pumps led to high-level resistance to quinolones. No resistance genes in Salmonella Paratyphi A related to ceftriaxone, such as CTX-M, CMY-2, or other extended-spectrum beta-lactamases, were identified using NGS. However, there was no response to treatment with third-generation cephalosporins, and the infection even progressed to vertebral osteomyelitis after 18 days of the initial treatment. The GyrA S83F substitution and the expression of multiple RND family efflux pumps may have contributed to the failure of treatment with ceftriaxone, even though the MIC of the isolate for ceftriaxone was less than 1 mg/L.
In our case, it is possible that the early onset of metastatic spondylitis accounted for the failure of treatment with a third-generation cephalosporin, because the treatment duration was not long enough. However, further studies on RND antibiotic efflux pumps are necessary to confirm them as the cause of third-generation cephalosporin treatment failure. In conclusion, this case involved a Salmonella Paratyphi A infection accompanied by spondylitis. To our knowledge, this is the first report to elucidate the mechanism underlying antimicrobial resistance in Salmonella Paratyphi A using NGS.
DEVELOPMENT OF LIFE EXPECTANCY IN THE CZECH REPUBLIC IN YEARS 1920-2010 WITH AN OUTLOOK TO 2050
At present the majority of advanced countries are dealing with the problem of the ageing of the population, and the Czech Republic is no exception. Demographic ageing is caused by the fact that mortality is dropping, especially infant mortality, and this prolongs expectation of life at birth. At the same time the birth rate is declining and the total fertility rate has dropped below the level needed for simple reproduction, which means that there are fewer children and more persons, in particular in the older and oldest age groups. It is very important to realise that changes in the level of mortality bring positive impacts in the lengthening of life expectancy on the one hand, but significant demographic ageing of the population on the other. In this contribution we would like to show how life expectancy has developed in the Czech Republic in a historical context and how it might develop in the coming years. For professionals, the application of the Lee-Carter method will certainly be interesting: this is a method commonly used worldwide by demographers and actuaries for modelling the future development of mortality, and it is also the basic method used for stochastic demographic projections.
Introduction
Currently most developed countries are facing the problem of population ageing, and the Czech Republic is no exception. Demographic ageing is due to a reduction in mortality, above all mortality at older ages. In the past, falling infant mortality contributed significantly to prolonging life expectancy at birth; today infant mortality is at a very low level and therefore no longer contributes much to prolonging life expectancy at birth. The birth rate is falling concurrently with mortality. There are fewer children and the number of people in the older and oldest age groups is increasing. It is very important to be aware of the fact that changes in the level of mortality bring a positive impact in the form of the prolongation of life expectancy on the one hand, whereas there is significant demographic ageing of the population on the other hand. This has and will have an impact on the pension and health insurance sector and the insurance sector as a whole. Some pension funds worry whether they will be able to meet their obligations in future because, in view of the longer life expectancy at retirement age, they will have to pay out pensions for a longer period than they had originally expected. In this article we would like to show how life expectancy developed in the Czech Republic in its historical context and how it could develop in the years to come. The application of the Lee-Carter method will certainly interest experts, as it is regularly applied by demographers and actuaries worldwide for modelling the future trend of mortality and it is the basic method for stochastic demographic projections.
Why Is Life Expectancy Longer and What Are Its Consequences?
Throughout the existence of humanity there has never been such a rapid increase in population numbers in the world as has occurred in roughly the last two hundred years. In the past the population grew slowly, above all because the number of children born alive only slightly exceeded the number of deaths. The birth rate of the population served as the main way to regulate the number of people in society.
As soon as the process of modernisation began, major changes appeared in the levels of the crude death rate and the crude birth rate. The population moved from the traditional regime of reproduction (where both the crude death rate and the crude birth rate were high) to a modern regime (where both are low). The population first gets younger and then demographic ageing sets in. In terms of demographic behaviour, these changes are called the "demographic revolution" or "demographic transition" in the literature. They came about above all due to advances made in medicine, improvements in the level of the population's hygiene and a rise in the standard of living. The rise of industrialisation and urbanisation increased the number of people living in cities. The fall in the crude death rate during the demographic revolution was caused above all by a fall in the mortality of newborns, infants, children and mothers. As more young people survive, the number of young people in the population increases; more of them live to parental age and therefore the number of births increases. The population may age in two ways. The first of these is relative ageing, caused by a fall in the birth rate and thus a fall in the number of children in the population. The second type is absolute ageing, caused by a fall in mortality and a rise in life expectancy, resulting in an increase in the number of older people in the population. Whether the population gets younger or older depends on the nature of the age structure in the past and on the current situation concerning the birth rate and mortality. In the past the high mortality rate resulted in a shorter expectation of life at birth. Expectation of life at birth was in the long term hugely affected by the level of infant and child mortality; high maternal mortality also had an effect in the past. However, a low expectation of life at birth does not mean that people could not live to a greater age if they survived the period of high mortality in early life. As the living and economic conditions of the population improved, life expectancy rose continuously and more people survived to a higher age. The most significant changes were recorded in modern societies in the second half of the 19th century. At this time expectation of life at birth in the world, irrespective of sex, averaged about 41 years, and by 1900 it had increased to 50 years (Rabušic, 1993). In 1930 the highest value of this indicator was 62 years, and by 1965 it was almost 72 years, again irrespective of sex. In the Czech lands, expectation of life at birth in the years 1899-1902 was about 38.9 years for men and about 41.7 years for women (Kučera, 1990). In 1930 it was about 54 years for men and 58 years for women, and by 1965 it was already about 67 years for men and more than 73 years for women. Today expectation of life at birth, irrespective of sex, in the most developed countries is about 80 years (Figures 1 and 2). Several factors have contributed to the growth of expectation of life at birth. At first, a higher level of mortality was recorded in higher-income areas. One may ask how this was possible for people with higher incomes and better housing, food, healthcare, and so on.
In the pre-industrial period in Europe these people lived in cities and their outskirts, and in areas with a higher concentration of people the common causes of death were infectious and parasitic diseases. There were great epidemics at a time when hygiene measures in the cities were still inadequate to prevent the rapid spread of these diseases. Likewise, immigrants did not have sufficient immunity against diseases and often succumbed to them. According to historical demographers, in the agricultural regions of Europe, where there was no high concentration of population, life expectancy was greater (45-50 years) than in the cities (30-33 years). As society gradually developed, it was possible to make greater investments in healthcare. In the 19th century hospitals began to be built in Europe, and this was the time of the discoveries made by Koch and Pasteur. People began to realise the importance of the principles of hygiene in the fight against infectious diseases, built water mains as a source of safe drinking water and began building sewer systems, and as a consequence the occurrence of great epidemics decreased significantly. People became urbanised, and life expectancy came to be longer in the higher-income areas. In the 1960s there were significant advances in medicine. Life expectancy significantly increased, as did the number of people in older age groups. There was significant ageing of the population in developed countries due to very low mortality. A large part of the population of these countries began to live to the age of 65, and an ever greater number lived to the age of 85. The fall in mortality was marked above all in older age groups, and as the age-specific mortality of old people fell, life expectancy rose. Many experts ask questions such as where the limit of human age lies, how long we will live, whether people will live to a greater age in good health, and whether the years of added life will be active and without sickness or whether people will be sick and no longer self-reliant. Will it be possible to slow down or delay ageing in some way? Longevity is partly inherited and also depends on lifestyle, diseases and so on. According to Wolf (1982), potential life expectancy is 90-100 years; some demographers even speak of death before the age of 90 as premature (Kučera, 1990), while, on the other hand, they are becoming more aware of what significant growth in the number of older people in the population means. There is the question of essential health and social care costs, with all the economic, social and political repercussions.
Life Expectancy in the Czech Republic in Years 1920-2010
Life expectancy is one of the most important demographic indicators, as it provides, in a single figure, aggregate information about age- and sex-specific mortality. The most commonly presented indicator is the expectation of life at birth, e_0, which is the average number of years a newborn can expect to live under current mortality conditions. Data for the calculation of life expectancy in the Czech Republic since 1920 can be found on the website of the Czech Statistical Office. Separate mortality tables for men and women are taken into account. Figure 3 compares expectation of life at birth and at the exact age of 1 for men and women with the infant mortality quotient in the years 1920-2010.
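Before turning to the historical figures, a minimal life-table sketch in Python may help make the definition of e_0 concrete. It assumes a vector of one-year age-specific mortality rates m_x and uses the common approximation q_x = m_x / (1 + 0.5 m_x); it illustrates the principle only and is not the CZSO's official life-table methodology.

```python
import numpy as np

def life_expectancy_at_birth(m):
    """Minimal period life-table sketch.

    m : one-year age-specific mortality rates m_x for x = 0, 1, ..., omega-1.
    Assumes deaths are spread evenly within each year of age; official tables
    refine this, especially at age 0 and in the open-ended last age interval.
    """
    m = np.asarray(m, dtype=float)
    q = m / (1.0 + 0.5 * m)            # probability of dying between ages x and x+1
    q[-1] = 1.0                        # close the table at the last age
    p = 1.0 - q                        # survival probabilities
    l = np.empty(len(m) + 1)           # survivors at exact age x, radix 100,000
    l[0] = 100_000.0
    l[1:] = 100_000.0 * np.cumprod(p)
    L = 0.5 * (l[:-1] + l[1:])         # person-years lived in each age interval
    return L.sum() / l[0]              # expectation of life at birth, e_0

# Toy illustration with made-up rates, not CZSO data:
# print(life_expectancy_at_birth([0.003] + [0.001] * 80 + [0.05] * 20))
```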
Figure 3: Expectation of Life at Birth and Life Expectancy for One-Year-Old Men and Women and the Quotient of Infant Mortality in 1920-2010 (Source: CZSO data, own calculations)
It is generally known that the values of life expectancy are affected by the development of mortality in the relevant years. This effect is reflected in noticeable fluctuations and changes in the trend of the monitored indicators, and it is therefore appropriate to briefly mention the development of mortality in the territory of today's Czech Republic. In the early 20th century the population living in our territory had a high crude death rate. The highest value was recorded in 1918 as a result of the Spanish flu pandemic; this was followed in the years 1923-1925 by its decline, and further short-term increases were caused by the flu epidemics of 1927 and 1929. A noticeable decline in the crude death rate continued, with the exception of the period of the Second World War, until the early 1950s. It was assumed that, thanks to new medical discoveries, vaccination and prevention, the epidemics that had to a great extent contributed to high mortality would disappear and that mortality would fall even further. Epidemics really did disappear, and pneumonia and flu were no longer a direct cause of death, but mortality did not fall, because diseases of the circulatory system and cancer came to the fore as causes of death. The period during which mortality from these diseases grew and offset the smaller number of deaths from infectious diseases lasted about 20 years. After this intermediate period, above all due to improving living conditions and new methods of medical treatment, mortality in the Czech Republic began to fall from the mid-1980s. Figure 3 clearly shows that at the start of the monitored period the level of infant mortality was very unfavourable, reaching almost 165 per mille. Infant mortality then fell sharply, with some minor fluctuations, until the 1960s (with the exception of 1945, when a significant fluctuation was recorded, caused by worse hygiene conditions at the end of the Second World War) to a value of about 20 per mille. In the 1970s the value of infant mortality was still about 22-24 dead children within one year per thousand children born alive. In view of improved hygiene conditions and the very good standard of paediatric care it then fell right down to 3-4 per mille, almost seven times lower than in 1960, and in 2010 it even fell to 2.7 per mille, which is one of the lowest values in Europe and the lowest level in the history of the Czech Republic. Life expectancy is presented separately for each sex because of the different levels of mortality in men and women. Figure 3 shows the clear difference between the expectation of life at birth of men and women and also the difference between expectation of life at birth and the life expectancy of a person at the exact age of 1. It may seem illogical that expectation of life at birth is shorter than the life expectancy of a person at the age of 1. This is because expectation of life at birth is the average age at death in a model population (the so-called stationary population) and is therefore affected by infant mortality. A greater difference between expectation of life at birth and the life expectancy of a person aged exactly 1 was recorded for girls only from 1981, and for boys not until 1985.
The reason is the fall in infant mortality described above. This clearly shows that infant mortality, at its current very low level, no longer provides much room for future increases in expectation of life at birth. Data on the development of expectation of life at birth show that in 2010 the value for men was 58% higher than in 1920; for women the increase compared with 1920 is 62%, and since 2004 there has, moreover, been a more rapid increase in the life expectancy of women in the 68-75 age groups. This increase is evidently caused by better mortality ratios among women of higher age groups. Generally speaking, life expectancy was prolonged over time above all by the fall in infant mortality. Currently its prolongation is driven above all by the fall in mortality at middle and higher ages, due to a fall in deaths from diseases of the circulatory system in men and women. As the life expectancy of men and women is being prolonged, it is therefore interesting to monitor the trend of life expectancy at selected ages. Figure 4 clearly shows male excess mortality, which means that in all age groups women have a longer life expectancy than men; the lower the age, the greater the difference. Why men have an almost 7-year lower life expectancy than women at the start of life is a frequent question. This phenomenon is caused by a generally higher intensity of male mortality in all age groups. However, if men live to a greater age they have about the same life expectancy as women. An interesting perspective on the trend of life expectancy is the so-called paradox of life expectancy, which is usually illustrated only by comparing the life expectancy of a newly born person and of a person aged exactly one year. Looking in more detail at life expectancy in childhood, it can be seen that in the past even the life expectancy of a person aged exactly 13 was longer than life expectancy at birth. For example, in 1920 the life expectancy of a 13-year-old boy was higher than that of a boy born in the same year, and the same applied in 1920 to a 12-year-old girl. This was caused by high infant and child mortality. Modelling and forecasting of life expectancy is part of so-called population forecasts. These forecasts are prospective estimates of the future trend in the number, age and sex structure of the population and are part of social and political forecasting, because they play a crucial role in many social, political and economic decisions, such as financing the pension and healthcare systems, the development of the labour market, or education. Unlike standard economic forecasts, they require a specific methodological apparatus which, in comparison with traditional forecasting methods, allows them to work with unusually long time horizons of 50 and more years. Population forecasts are most frequently based on deterministic models and are usually calculated in three different variants of future development: low, medium and high. In each variant the demographic factors are estimated on the basis of the extrapolation of actual values and include various assumptions about the development of the individual components of population development. The application of deterministic models for forecasting is very popular because these models are very simple.
However, this simplicity is balanced by considerable disadvantages: for example, the probabilistic aspect is not considered, the resulting population size is assumed to occur with the same probability over the entire time interval, and perfect, but unrealistic, relations are assumed between the demographic components (Härdle and Myšičková, 2009). Forecasts based on stochastic time series models began to appear during the 1990s. The impulse was the development of the Lee-Carter method (Lee, Carter, 1992) for forecasting mortality. This method is based on forecasting historical time series of age-specific mortality rates using standard procedures of time series analysis. After various modifications, such as Lee, Miller (2001), Brouhns, Denuit, Vermunt (2002) and Renshaw, Haberman (2006), this method became the most commonly applied, and today it is difficult to imagine any population forecast without its application (it is also used by the U.S. Census Bureau in its estimates of world population).
Lee-Carter Method
The Lee-Carter method is used for forecasting life expectancy. Its principle is relatively simple and involves modelling age-specific mortality over time with the model
ln(m_{x,t}) = α_x + β_x κ_t + ε_{x,t},  x = 0, 1, …, ω−1,  t = 1, 2, ..., T,
where m_{x,t} are the age-specific mortality rates at age x and time t, forming the ω × T matrix M of age-specific mortality rates; exp(α_x) is the average profile of mortality at age x (irrespective of time t); β_x is the age-specific constant representing how strongly mortality at a given age responds to changes in the total level of mortality κ_t at time t (κ_t can also be described as the total mortality index); and ε_{x,t} is white noise. The identification of the model is ensured by the conditions Σ_x β_x = 1 and Σ_t κ_t = 0. The LC method is a special case of principal component analysis in which the data on the logarithms of the age-specific mortality rates are summarized by a single component, κ_t, which explains the largest share of the data variability, thereby reducing the dimensionality of the matrix M. In a second step, Lee and Carter (1992) adjust the estimated index κ_t so that, for every year t, the fitted total number of deaths Σ_x S_{x,t} exp(α_x + β_x κ_t) equals the observed total, where S_{x,t} is the mid-year population at age x and time t. This adjustment was modified in various ways, for example by Wilmoth (1993) or Lee and Miller (2001). The construction of the forecast is based on the fact that the parameters α_x and β_x are constant in time, while the total mortality index κ_t, which is a one-dimensional time series, is modelled and forecast on the basis of the Box-Jenkins methodology (Box, Jenkins, 1970). ARIMA models are used to forecast κ_t, and the forecast of the age-specific mortality rates is then obtained from the estimated parameters α_x and β_x through the relationship m_{x,T+h} = exp(α_x + β_x κ_{T+h}). However, if the Lee-Carter method is applied to forecast life expectancy, a problem arises because estimates of life expectancy are needed separately for men and women. For this reason the LC method is applied separately to the age-specific mortality rates of men and women, and estimates of the mortality index are thereby obtained for each sex separately (κ_t^W, κ_t^M). However, if separate ARIMA models are used to forecast the two indexes, their forecasts diverge unrealistically over time. This problem may be resolved by applying multivariate time series methods to both mortality indexes.
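To make the estimation and forecasting steps concrete, the following Python sketch implements a bare-bones version of the method just described: α_x as the time average of ln(m_{x,t}), a rank-one SVD for β_x and κ_t with the usual normalisation, and a random-walk-with-drift forecast of κ_t. This is only an illustrative sketch (the calculations reported in the article were done in R with the demography package); the second-stage adjustment of κ_t to observed death counts and all interval estimation are omitted.

```python
import numpy as np

def fit_lee_carter(log_m):
    """Basic first-stage Lee-Carter fit via SVD.

    log_m : (n_ages, n_years) array of ln(m_{x,t}).
    Returns alpha_x, beta_x (normalised so that sum(beta) = 1) and kappa_t
    (which sums to zero by construction of the row-centred SVD).
    """
    alpha = log_m.mean(axis=1)                      # average age profile alpha_x
    centered = log_m - alpha[:, None]               # ln(m_{x,t}) - alpha_x
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    beta = U[:, 0]
    kappa = s[0] * Vt[0, :]
    scale = beta.sum()                              # impose sum_x beta_x = 1
    return alpha, beta / scale, kappa * scale

def forecast_kappa(kappa, horizon):
    """Random walk with drift, the ARIMA model typically selected for kappa_t."""
    drift = (kappa[-1] - kappa[0]) / (len(kappa) - 1)
    return kappa[-1] + drift * np.arange(1, horizon + 1)

def forecast_rates(alpha, beta, kappa_future):
    """Point forecasts m_{x,T+h} = exp(alpha_x + beta_x * kappa_{T+h})."""
    return np.exp(alpha[:, None] + np.outer(beta, kappa_future))
```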
As shown by Darkiewicz and Hoedemakers (2004) for England and Wales and by Arlt, Arltová, Bašta and Langhamrová (2010) for the Czech Republic, Slovakia, Austria and the Netherlands, a certain relationship may be identified between the mortality indexes of men, κ_t^M, and women, κ_t^W. This suggests the use of cointegration analysis, which helps to identify the presence of long-term and short-term relationships. Cointegration of the mortality indexes κ_t^M and κ_t^W may lead to an error correction (EC) model (e.g. Arlt, Arltová, 2009), which is subsequently used for their forecasting. The presence of a long-term relationship also binds together the forecasts of κ_t^M and κ_t^W, and thereby the forecasts of male and female life expectancy.
Using the Lee-Carter method for forecasting life expectancy in the Czech Republic
The application of this method to data for the Czech Republic is based on one-year age-specific mortality rates at ages x = 0, 1, …, 100+ in the years 1970-2008 (2008 was chosen so that forecasts obtained by the LC method could be compared with the Czech Statistical Office projection of 2009) (Arltová, 2011). Because life expectancy is calculated separately for men and women, these age-specific rates must be available for both sexes. Figure 6 and Figure 7, which are also called rainbow graphs (Hyndman, Shang, 2010), show the one-year age-specific male and female mortality rates; the source of these data was the Human Mortality Database (2008) (hereinafter the "HMD"). These graphs have a colour range structured according to rainbow colours, so the oldest time series are dark and the youngest are light. It is clear from these graphs that mortality fell for both sexes at all ages; the fall was most pronounced at lower ages and was more marked for women than for men.
Figure 7: Logarithms of Age-Specific Female Mortality Rates in the Czech Republic in Years 1970-2008 (Source: HMD data, own calculations)
A two-stage Lee-Carter method was subsequently applied to these one-year age-specific mortality rates, separately for men and women, and the male and female mortality indexes (κ_t^W, κ_t^M) were calculated (Figure 8); the calculation was made in the R program using the demography package (Hyndman, 2011). The ADF unit root test (Dickey, Fuller, 1979) showed that the male κ_t^M and female κ_t^W mortality indexes are nonstationary (Table 3). There is bilateral dependence between κ_t^M and κ_t^W: the correlation coefficient obtained from the correlation matrix of the residuals of the VAR(1) model indicates a strong linear dependence between κ_t^M and κ_t^W in the same year (r = 0.896646). The Johansen cointegration test (Johansen, 1991 and 1995) showed that the system contains one cointegration vector. Tables 6 and 7 contain diagnostic tests of this model. Using the estimated model, it is then easy to construct forecasts for both mortality indexes. The forecast horizon selected was h = 42, i.e. up to 2050 (for comparability with the CSO projection, 2009). Figure 11 contains point and 95% interval forecasts of both indexes; the interval forecasts were obtained by simulation (n = 1000). It is evident that forecasts calculated using the cointegrated Lee-Carter method are interlinked and do not diverge over time. From these forecasts of the mortality indexes, forecasts of the age-specific mortality rates were then obtained, i.e. for men and women separately.
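As a rough Python counterpart to the cointegrated forecasting step just described (the article's own computations were done in R), the sketch below runs a Johansen test and fits a vector error-correction model with one cointegrating relation to the two mortality indexes using statsmodels. The lag order, the deterministic-term specification, and the input series are illustrative assumptions, not the authors' exact model.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

def joint_kappa_forecast(kappa_m, kappa_w, horizon=42):
    """Joint forecast of the male and female mortality indexes.

    kappa_m, kappa_w : 1-D arrays with the estimated indexes, e.g. from the
    Lee-Carter sketch above; placeholders here, not the Czech data.
    """
    endog = np.column_stack([kappa_m, kappa_w])

    # Johansen test for the number of cointegrating relations
    # (det_order=0: constant term; k_ar_diff=1 lagged difference).
    joh = coint_johansen(endog, det_order=0, k_ar_diff=1)
    print("trace statistics:", joh.lr1)
    print("critical values (90/95/99%):", joh.cvt)

    # Error-correction model with one cointegration vector, as reported in the text.
    res = VECM(endog, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
    return res.predict(steps=horizon)   # joint point forecasts, shape (horizon, 2)
```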
Figure 14 and Figure 15 contain their graphs (on a logarithmic scale) in the form of point forecasts of age-specific male and female mortality rates in the Czech Republic for the years 2009-2050. Forecasts of life expectancy at birth for men and women for the years 2009-2050 (Figure 14) can then easily be computed from the calculated forecasts of age-specific mortality rates; the results are summarized in Table 9.
Conclusion
In the Czech Republic, the more significant changes in mortality appeared in the early 1990s. In this period there was a more pronounced fall in mortality, above all at middle and higher ages. More and more people live to a higher age, and the number and share of old people in the population is increasing. Changes are taking place in the population age structure: in the Czech Republic, total fertility has fallen, in the long term and significantly, below the limit of simple reproduction, while expectation of life at birth has simultaneously been prolonged. The number of people in the older and oldest age groups is increasing, and in economic terms the share of people of post-productive age is also rising. In Western Europe a quantitatively new type of decline in the level of mortality took place about thirty years ago, which is described as the end of the third stage of the epidemiological transition, or as its fourth stage. This stage is a period in which degenerative and civilisation diseases predominate as the classes of causes of death. What is characteristic of it is an unexpected fall in mortality at greater ages; above all, the intensity of mortality from cardiovascular diseases is falling. The Czech Republic shows a certain delay in the development of mortality compared with developed countries. The fall in mortality in recent years is explained by this delay, compared with the rest of Europe, in the treatment of diseases of the circulatory system. In the 1950s antibiotics were applied en masse in the Czech Republic; however, at this time there were no investments in the very costly technologies and instruments which allow the treatment of diseases of the circulatory system. Drugs for the prevention and treatment of diseases of the circulatory system were not available to most of the population until after 1990, whereas in Western Europe these methods and drugs had already been applied at the turn of the 1970s. A great advance came with the so-called bypass, a surgical procedure to circumvent damaged heart vessels. Promoting and maintaining a healthy lifestyle also plays an important role in reducing mortality, together with the related issues of disease prevention and the promotion of responsibility for one's own health. The promotion of physical activity, the monitoring of cholesterol levels, overweight and increased blood pressure, the offer of healthy nutrition, and the discouragement of smoking and alcohol consumption are far more a matter of course today. The Czech Republic has a greater chance of further prolonging life expectancy by reducing mortality at middle age, above all from diseases of the circulatory system and malignant neoplasms. The fact that mortality has developed favourably and, as arises from our estimate (and from the estimate of the Czech Statistical Office), will continue to do so is a great success for a developed society. On the other hand, however, it is a challenge for economists and for pension funds. Society will have to solve a number of economic problems which arise from the changing relation between the population of post-productive and economically active age.
Pension funds will have to reassess, or make more specific, their assumptions about the development of mortality on which their financial plans are based. The ageing of the population affects a number of other areas, and in this context economists speak of the so-called silver economy, which is targeted at older fellow-citizens. Today many businessmen and manufacturers realise that room is appearing for products and services geared towards older age groups. The fact that the number of people in the older and oldest age groups is increasing significantly must be taken into account, and society must adapt to it, because older people will also form a highly significant electoral base. The ageing population and the problem of longevity are currently becoming highly discussed topics, not only at the forefront of interest of demographers and economists but ever more often in connection with the further social and political development of society. Of course, the process of population ageing must not be looked at only in terms of prolonging age; demographic ageing must be understood as a new challenge for society. There are a number of questions as to whether society can cope in future with double the number of senior citizens and how the system of social and health care will have to change in this context. It must be realised that Czech society is gradually becoming a longevity society. As has already been stated, these changes, which are the result of changes in mortality, will have an impact on a number of areas of society. For example, there will have to be a greater focus in healthcare and medicine on prevention, so that people can live to a greater age in a relatively good state of health. The structure of the labour market will also have to change, and this relates to changes in the education system, where the system of lifelong learning will have to be extended. Issues relating to migration are a separate question. Ageing population trends cannot be reversed by pro-population measures; they can only be moderated. Extraordinary attention needs to be paid in advance to the problems of an ageing population.
International Security Strategy and Global Population Aging
Christian Leuprecht, Royal Military College of Canada, christian.leuprecht@rmc.ca
Journal of Strategic Security, Volume III, Issue 4, 2010, pp. 27-48. DOI: 10.5038/1944-0472.3.4.2. ISSN: 1944-0464, eISSN: 1944-0472. This article is available in Journal of Strategic Security: https://scholarcommons.usf.edu/jss/vol3/iss4/7
Abstract
To be successful, grand strategy requires objectives, concepts, and resources to be balanced appropriately with a view to defeating one's enemy. The trouble is, of course, that Generals are always well prepared to fight the last war. In the words of Yogi Berra, predictions are always difficult, especially when they involve the future. Yet, grand strategy is all about the future. But how is one to strategize about a future that is inherently difficult to predict? One way to overcome this conundrum is to rely on independent variables that can be projected into the future with reasonable accuracy. Aside from environmental indicators, the most consistent of those is demography, specifically demographic change and difference. The demographic approach to international security leads to strategic conclusions about the integration of military, political, and economic means in pursuit of states' ultimate objectives in the international system.
Introduction
To be successful, grand strategy requires objectives, concepts, and resources to be balanced appropriately with a view to defeating one's enemy. The trouble is, of course, that Generals are always well prepared to fight the last war. In the words of Yogi Berra, predictions are always difficult, especially when they involve the future. Yet, grand strategy is all about the future. But how is one to strategize about a future that is inherently difficult to predict? One way to overcome this conundrum is to rely on independent variables that can be projected into the future with reasonable accuracy. Aside from environmental indicators, the most consistent of those is demography, specifically demographic change and difference. The demographic approach to international security leads to strategic conclusions about the integration of military, political, and economic means in pursuit of states' ultimate objectives in the international system. Waxing and waning neo-Malthusian "end-of-the-world-as-we-know-it" gestations notwithstanding, demography had long been relegated to the epiphenomenal margins of grand strategy.3 Starting with Paul Kennedy's Preparing for the Twenty-First Century, followed by a wave of hawkish neo-conservative mongering about the perils of over-aging and immigration,4,5 and spotlighted most recently in the U.S. National Intelligence Council's Global Trends 2025, fertility, mortality, and migration have been maturing in grand strategy as independent variables in their own right. Still, much of that literature feels a lot like neo-Nebuchadnezzarian harbingers of the devil's writing on the wall. Yet, the purpose of grand strategy is not to posit demography as destiny.
The future is not necessarily all that bleak. Provided that the strategic implications of demography are both understood and acted upon, we can fashion our world in accordance with demographic change instead of having demographic change fashion our world for us. The world is at a demographic crossroads. Throughout all of history, high birth rates ensured predominantly young populations with few older people. War and epidemics, such as the plague, would intervene to depress population growth.6 By contrast, depressed population growth today is a function of a historically unprecedented decline in birth rates.7 That is, women are consistently having fewer children than at any previous time in history, or none at all (for reasons that are beyond the scope of this article). Demographically, the world is entering virgin territory. On the one hand, demographic trends suggest that there will be more "heavy lifting" to do with respect to international security. On the other hand, some countries are far better positioned to weather the impending demographic storm than others. Ergo, fewer countries will end up having to do more of the "heavy lifting" on security, and with fewer resources. Some countries are better positioned vis-à-vis demographic change than others, and some will even have security benefits accrue to them, especially as a result of the continental multiplier effect in North America that is generated by virtue of the demographic advantage and concomitant economic growth enjoyed by the United States. In the context of slowing economic growth and increased costs of labor and defense spending, no state or combination of states appears likely to overtake the United States' position of economic and military dominance. Haas argues that global population aging is likely to extend U.S. hegemony (because the other major powers will lack the resources necessary to overtake the United States' economic and military power lead), as these other states are likely to fall even farther behind the United States.8 These demographic developments suggest that there is no other country on the horizon that is able to muster the Americans' combination of innovation, economic growth, and low ratio of spending on capital versus personnel (which is key to military dominance on the high-tech battlefield of fourth generation warfare). Global population aging is thus likely to generate considerable security benefits for North America. Rarely can analysts of politics claim to be documenting new phenomena. Population aging, however, is one of these revolutionary variables. Never before has humanity witnessed such dramatic, widespread aging among the world's most industrialized and powerful democracies. Two long-term demographic trends coincided to produce population aging: decreasing fertility rates and increasing life expectancy. Fertility rates refer to the average number of children born per woman in a given country. For a state to sustain its population (assuming zero net immigration), fertility levels must exceed about 2.1 children per woman. Today the United States is the sole liberal democracy that comes close to meeting this requirement. Most are well below this average and have been for decades. As Figure 1 shows, the proportion of the world's population that resides in advanced industrialized democracies will continue to decline: from 24 percent in 1980, to 18 percent today, and 16 percent by 2025.
This is a remarkable reversal: between 1700 and 1900, Europe and its overseas offshoots had doubled their proportion of the world's total population from 20 percent to 40 percent.9 As late as 1950, Europe, Japan, and North America together comprised roughly one-third of the world's population, compared to one-fifth today and under one-seventh by 2050. By 2030, that translates into an expected total increase of less than 40 million people (primarily concentrated in North America, as Europe's overall population starts to shrink), as opposed to 1.5 billion people in the rest of the world. In absolute terms, India's population will grow the most (by 240 million to 1.45 billion people), followed by an increase of 100 million in China, for a total population of 1.3 billion. Growth will also be strong throughout Africa, Latin America, and the Caribbean. Much of Eastern and Central Europe, Russia, Italy, and Japan, by contrast, will see their populations decline by as much as 10 percent. Bucking the trend are the traditional Anglo-Saxon settler countries, the United States, Canada, Australia, and New Zealand, where population growth between 2010 and 2025 is projected to exceed 10 percent. Its current growth rate of 1.4 percent notwithstanding, China's population is projected to start declining by 2025 (when it will officially be overtaken by India as the world's most populous country, although many demographers already believe India to be more populous than China). Russia's population, by contrast, is projected to fall from 141 to 130 million by 2025 while its population ages rapidly.10 While these developments have but a moderate effect on the pecking order among the world's three most populous countries, Table 1 shows that the impact on "the rise and fall" of other "great powers" (measured by population size) is marked. By 2025, for instance, the number of Russian women of child-bearing age will be barely half of what it is today. The drag on Russian productivity is expected to be considerable (although it will be offset in the short term by rising rents from Russia's vast wealth in natural resources). In fact, Vladimir Putin has referred to the precipitous decline of Orthodox Slavs as the country's greatest security threat: "The most acute problem facing Russia today is demography," he told the Kremlin in his 2006 State of the Union Address. This concern is driven by large swaths of land that are already under-populated and by a combination of higher fertility rates and migration among ethnic minorities that are poised to eclipse Orthodox Slavs.11 The scope of the aging process is remarkable. By 2050, at least 20 percent of the population in allied countries, but also in China and Russia, will be over sixty-five; in Japan it will be as high as one-third of the population. By 2050 China alone will have more than 330 million people over sixty-five. Population aging, as Table 2 shows, is accompanied by a diffusion of absolute population decline. Russia's population is already decreasing by 500,000-700,000 people per year. The trends projected in these data are largely irreversible and highly accurate. The reason for this certainty is simple: the elderly of the future are already born. Put another way, anyone over the age of forty in 2050 has already been born.
Except for some global natural disaster, disease pandemic, or other worldwide calamity, the number of people in the world who are over sixty-five will grow exponentially over the coming decades. Even in democracies with comparatively good demographic prospects, the proportion of that cohort is projected to double by 2040.
Outgrowing the Age of Major International Conflict
Contrary to hawkish technological "fantasies" of high-tech international war,12 then, the probability of a major international war actually continues to diminish as the world's population grows older. Specifically, the demographic challenges faced by China and Russia make an international military conflict with America increasingly unlikely. Haas refers to this as "a geriatric peace."13 For the same reason, it is highly improbable that any disputes over the Arctic would ever escalate to the point of war. Global aging also increases the likelihood of continued peaceful relations between the United States and other great powers. Others have shown that the probability of international conflict grows when either the dominant country anticipates a power transition in favor of a rising state or states, or when such a transition actually occurs.14 By adding substantial support to the continuation of U.S. hegemony, global aging counteracts either outcome. Thus, Haas surmises that an aging world decreases the probability that either hot or cold wars will develop between the United States and other great powers.15
Raining on the Parade: When the Young and the Restless Move to the City, and Grow Old… before They Grow Rich
Despite the predictions of Haas, global population aging is likely to make the twenty-first century a particularly dangerous time for U.S. international interests. Population aging will beset much of the world at some point this century. In fact, the aging problem in many developing states is likely to be as acute as for industrialized countries, but the former have the added disadvantage of growing old before growing rich, thus greatly handicapping their ability to pay for elder-care costs.16 For example, in China the comparative advantage associated with a large working-age population relative to a small proportion of children and elderly starts to wane around 2015, a problem that is further exacerbated by a growing excess of men over women.17 The ratio of working-age adults to elderly is projected to shrink from just under ten in 2000 to 2.6 by 2050, when China's median age is projected to be just over forty-five years. That median age will make China one of the oldest populations in the world, older than Japan, the country with the oldest population today and a projected median age of forty-three by then.18 If the strain on governments' resources caused by the cost of aging populations becomes sufficiently great, it has the potential to exacerbate systematically both the number of fragile states and the extent and depth of that fragility. As fragile states are prospective havens for organized crime and terrorism, the prospect of having to contend with a proliferation of fragile states, with fewer resources at the allies' disposal, could prove the single greatest security challenge of this century.19
This is complemented by an already reduced capacity to realize other key international objectives, including preventing the proliferation of weapons of mass destruction (WMD), funding nation-building, engaging in military humanitarian interventions, and various other costly strategies of international conflict resolution and prevention. Global population is expected to grow by 1.2 billion by 2025, an increase of not quite 20 percent from the current 6.8 billion. However, that is well below the rate of increase between 1980 and 2009, when the globe's population grew by 2.4 billion. While the rate of growth may be slowing, the impact of the absolute growth is still staggering. The populations of fifty countries are projected to grow by a third, in some cases by two-thirds, by 2025 (which, of course, places additional stress on natural resources, services, and infrastructure). These are predominantly large, Islamic countries of 60 million people or more that are located primarily in sub-Saharan Africa as well as the Middle East and South Asia. With the demographic transition progressing more rapidly in the Middle East and South Asia (Figure 2), the challenges associated with population growth, such as youth bulges, will be greatest in sub-Saharan Africa. Countries with so-called "youth bulges" (the proportion of the adult population aged 15-29) are depicted in Figure 3.20 These countries have been shown to be at greater risk of civil conflict due to strains on systems of schooling and socialization as well as un- or under-employment and a concomitant propensity for deviance; countries in which more than 60 percent of the population is under thirty have been shown to be four times as prone to civil war as countries with mature populations.21 Another way to make the case for the correlation between fecundity, youth bulges, and the propensity for conflict is to examine the association between a country's position along the demographic transition and the outbreak of civil war (as shown in Figure 4): the further along a country's population is in the demographic transition, the lower the probability of civil war.
Figure 4: Demographic Transition and Onset of Civil War
Populations in the West Bank/Gaza Strip, Iraq, and Saudi Arabia will continue to grow and remain comparatively youthful; therefore, we can expect continued political instability and outmigration among those countries. Still, the youth bulge will be greatest in Afghanistan, Pakistan, the Democratic Republic of Congo, Nigeria, Guatemala, Iraq, Ethiopia, Angola, Chad, and Yemen, producing population growth rates of over 2% annually (see Table 3), with populations in those countries doubling every 30-35 years. Even if fertility rates in Nigeria or Afghanistan were to decline, they are currently so high that, at best, each country might barely transition from a young to a youthful age structure by 2025. Although youth bulges are on the wane in the Middle East and Southeast Asia, by 2025 three-quarters of the countries with persistent youth bulges will be in sub-Saharan Africa. A key driver of this development is HIV/AIDS, which delays the passage of populations with high incidence rates of infection through the demographic transition by compromising the elderly portion of the population. So, the bulk of conflict and political instability will continue to be scattered across the Middle East, Asia, and some Pacific islands, but is likely to be concentrated in sub-Saharan Africa.
Since conflict is the single greatest "push factor" of migration, immigration pressures from sub-Saharan Africa to Europe (but also to places such as South Africa, as Lindy Heinecken's 2001 micro-study on the subject has demonstrated) are expected to continue unabated and may accelerate as climate change makes life even less viable in that part of the world. Migration and age structure have several connections, one of which is that the most mobile populations also tend to be youthful. That is, migrants are overwhelmingly between 15 and 35 years old. There are a number of reasons for this, but perhaps most importantly, these age groups stand to reap the greatest long-term payoff from migrating and they have the least to lose from being uprooted. Owing to the compound effect of migration and fertility, mega-cities are likely to continue as the locus of youth bulges. Ergo, they are likely to become hubs of volatility, with the population influx vastly exceeding employment prospects and thus overtaxing services.23 In fact, where the annual rate of urban population growth exceeds four percent, the probability of civil conflict has been found to be 40 percent; where the rate of growth is between one and four percent, it is half that, at 20 percent; and where it is less than one percent, it is 19 percent. In other words, disproportionately high rates of urbanization are associated with a disproportionately high probability of civil conflict. Since urban populations are both younger and more diverse than rural ones, one might also add a growing urban-rural divide and territorial differentiation as subsidiary challenges. For the first time in history, more people now live in cities than in the countryside. As urban growth outpaces national population growth by a factor of 1.5, the proportion of urban dwellers across the world is expected to rise to 57 percent by 2025. In the less-developed world, where population growth is greatest, three billion more people will live in cities (in addition to the 2.3 billion urban dwellers in 2005), a 50 percent increase, from 42.7 percent of the population in 2005 to 67 percent by 2050. In sub-Saharan Africa, the urban population is expected to grow roughly three-fold, reaching one billion by mid-century. While much attention has been focused on the growth of mega-cities, most of the urban growth is expected to transpire in secondary centers along migratory crossroads. When troops and reconstruction/development funds are deployed in the service of international security and stability, it will, in all likelihood, be in this part of the world. Where expeditionary, civil-affairs, and psychological-operations capabilities are concerned, allied armed forces need to prepare accordingly. Many of the West's immigrants originate in countries from this arc of instability. Given the likelihood of future involvement in the provision of security in this part of the world, diasporas will become increasingly important to mission success (and legitimacy).
In Search of… New Friends
Among her allies, America will be shouldering a growing fiscal burden of expenditures on international security as the United States' proportion of the developed world's population and GDP continues to rise (see Figure 5; cited in Jackson and Howe, 2008: Appendix 1, Section 5). Population aging will hamper the ability of a number of allies to "step up to the plate." Afghanistan may already provide some preliminary empirical evidence to this effect.
Following the logic of relative population aging, the United States, Canada, the UK, and Australia are becoming relatively more important allies. As more countries, and especially NATO allies, face growing fiscal constraints, fewer allies will end up having to pay a growing share of the costs of common international security interests. But even for those countries that are relatively well positioned, this will become increasingly difficult as they face their own fiscal challenges growing out of population aging. There will be a need to do more with less as population aging strains defense spending. Allied armed forces should not be expecting significant increases in their budgets or personnel. As funds become even scarcer than they already are, careful strategic planning will be imperative. For example, given the way international security will be developing, the potential need to deploy tanks is diminishing precipitously. With resources at a premium and the personnel-to-capital expense ratio on the rise, the armed forces cannot afford procurement "errors." In light of elevated and increasingly disproportionate personnel-to-capital ratios, armed forces will be disinclined to expand their troop strength. On the contrary, they will be reducing troop levels to free up money for development and procurement. This will be especially difficult in a tight labor market, which will cause the costs of highly qualified personnel to rise significantly. In effect, the economic impact of population aging will challenge allied countries that lack the fiscal room necessary to maintain the extent of their global position and involvement, let alone adopt major new initiatives. Ergo, allies have little choice but to work actively through international institutions to preserve and enhance soft power.24 Since the ability of NATO and its member countries to assert themselves on the ground will face mounting financial, personnel, and matériel constraints, the allies will have to maximize their returns from international institutions. Similarly, having to make do with less at home means allies will have to harness synergies among domestic institutions and government departments. The "whole-of-government" comprehensive approach is the future. As apprehensive as countries may be about contributing troops, as situations arise where they deem intervention to be in their interest, fewer allies will be in a position to contribute; and those countries with more favorable demographic trends, such as Canada, Australia, and New Zealand, and also the UK, France, the Netherlands, and the Scandinavian countries, should prepare themselves (both at the level of mass psychology and operationally) to take on a greater share of the burden. This is not a normative observation but a sociological one: among many of the traditional allies, fiscal and defense capabilities are likely to erode further. So, if a country deems a given situation in need of intervention, it will have to put its money (and troops) where its mouth is. Yet, as some traditional allies across Europe struggle in their ability to contribute financial and military prowess to international missions, countries further along the demographic transition in Latin and Central America (e.g., Brazil and Mexico) and Asia (e.g., India and China) will start to benefit from international migration's human-capital and technology-transfer effect as educated and affluent expatriates return to their countries of origin.
As populations throughout the Americas mature and their economies develop, their strategic significance grows. Preliminary evidence to this end can be found in Mexican financial contributions to the reconstruction effort in Haiti as well as Brazil's military leadership in Haiti. Together, these countries will become cognizant that stable countries right across the continent are in their best interest as "pull" factors increase with improved economic conditions and "push" factors, such as political instability in Haiti, persist. So, collaboration across the Americas is likely to grow. Yet many countries in the Americas harbor suspicions about US interests and, for political reasons, do not want to be seen as too cozy with the United States. This should provide an interesting opportunity for "middle-power" allies, such as Canada and Australia, to expand their traditional role of honest broker and take on a continental leadership role that should allow them to punch well above their weight. Moreover, as youth bulges transition into bulges in the working-age population, some Asian, Latin American, and North African countries have the potential to harness not only the economic returns of the demographic transition (Bloom, Canning and Sevilla, 2003) but also "democratic returns" (see Figure 6). In countries with such an age structure (a large working-age population but relatively few children and elderly), prospects for improved education and higher standards of living are likely to become an impetus for political moderation. Since Iran's population structure will be more mature than that of its neighbors, demographically, the risk of its initiating international war is actually on the wane. Conclusion Four substantive implications follow: First, demographic developments suggest that the high-tech fantasy of the "big war" with countries such as China and Russia that the hawks are fretting about is a strategic folly whose pursuit ties increasingly scarce demographic and financial resources to a wrong-headed vision. Second, time is on "our" side: Maturing population structures will make some "rogue states," such as Iran, North Korea, and Venezuela, more politically stable. Third, youth bulges will emerge as a growing driver of political instability in select African, Middle Eastern, and Asian countries. Fourth, demographic convergence is providing a welcome opportunity to make new friends in the pursuit of global stability, especially in the Americas. Some of these claims have been advanced elsewhere. But the analytical implications for the strategic pursuit of soft power and new friends relative to the demographic context of political instability are novel. The aging crisis is less acute in some countries than in others. Where it is less acute, countries have better prospects to shape international security according to their national interests. Still, the magnitude of the costs will be unprecedented (due to the compound effect of diminished overall contributions and expanded demand), as will the constraints they will impose on defense spending. The more countries sustain their comparative demographic advantage and relatively superior ability to pay for the costs of their elderly population, the more we are likely to see a middle-power renaissance among those allied countries that continue to enjoy fairly favorable demographic developments.
It is in the allies' strategic and defense interests to rein in the costs of old-age security and health care as much as possible, minimize the gap between elder-care obligations and the resources set aside for them, raise the retirement age, and maintain as open an immigration policy as possible to keep their median age relatively low. Proactive policies designed both to take full advantage of the opportunities created by global aging and to mitigate the costs of this phenomenon will enhance international security through the twenty-first century. The bad news is that demand for armed forces will grow as demographic determinants of domestic instability rise over the next twenty years. The good news is (1) that the demographic determinants of international war are on the decline and (2) that demographic projection allows us to pinpoint the likely hotspots. In other words, analysis of the demographic evidence suggests that armed forces should prepare for international interventions rather than international war. If the dictum that the generals are always well-prepared to fight the last war holds, then prevailing military strategy runs a real danger of having armed forces bet on the wrong horse. Owing to two competing trends, they will find it increasingly difficult to cope with growing demand for their services. Maintaining the armed forces' functional imperative in a tightening labor market means substituting capital for labor. Increased strain on demographic and fiscal resources means smaller but more capable, effective, and professional armed forces. But with soldiers' median age on the rise, and defense spending atrophying under competing political priorities in democratic countries with aging populations, the inclination will be to shift armed forces' dwindling fiscal resources from capital to labor. Owing to nuanced demographic trends and political structures, the crowding out will be more rapid and severe in some countries than in others. The downside of this trend is that fewer countries will need to bring a greater proportion of armed and fiscal resources to bear on a less secure world. The upside, however, is that demographic trends are also opening up opportunities to look for new friends as partners in international security. These developments place a premium on soft power, which includes being intentionally strategic about international security regimes and institutions, as well as international collaboration among the armed forces of traditional allies and demographically emerging powers.
Stabilization of Heegaard splittings For each g greater than one there is a 3-manifold with two genus g Heegaard splittings that require g stabilizations to become equivalent. Previously known examples required at most one stabilization. Control of families of Heegaard surfaces is obtained through a deformation to harmonic maps. Introduction While area minimizing surfaces have proved to be a powerful tool in the study of 3-dimensional manifolds, harmonic maps of surfaces to 3-manifolds have not been as widely applied, due to several limitations. A homotopy class of surfaces generally gives rise to a large space of harmonic maps. A harmonic map of a surface need not minimize self-intersections and may fail to be immersed. In negatively-curved 3-manifolds there is a unique harmonic map in each conformal class of metrics on the domain, and smooth families of surfaces give rise to smooth families of harmonic maps [5], [7]. In this paper we study Heegaard splittings of 3-manifolds using families of harmonic surfaces. A genus g Heegaard splitting of a 3-manifold M is a decomposition of M into two genus g handlebodies with a common boundary. It is described by an ordered triple (H_1, H_2, S) where each of H_1, H_2 is a handlebody and the two handlebodies intersect along their common boundary S, called a Heegaard surface. Two Heegaard splittings (H_1, H_2, S) and (H'_1, H'_2, S') of M are equivalent if an ambient isotopy of M carries (H_1, H_2, S) to (H'_1, H'_2, S'). Every 3-manifold has a Heegaard splitting [14], and Heegaard splittings form one of the basic structures used to analyze and understand 3-manifolds. Corresponding to a Heegaard splitting is a family of surfaces that sweep out the manifold, starting with a core of one handlebody and ending at a core of the second. This family is geometrically controlled by deforming it to a family of harmonic maps. When the manifold is negatively curved, harmonic maps of genus g surfaces have uniformly bounded area. In the manifolds we consider, the geometry forces small area surfaces to line up with small area cross sections of the manifold. As a result we obtain obstructions to the equivalence of distinct Heegaard splittings. A stabilization of a genus g Heegaard surface is a surface of genus g + 1 obtained by adding a 1-handle whose core is parallel to the surface. Such a surface splits the manifold into two genus g + 1 handlebodies, and thus gives a new Heegaard splitting. Two genus g Heegaard splittings are k-stably equivalent if they become equivalent after k stabilizations. Any two Heegaard splittings become equivalent after a sequence of stabilizations [23]. An upper bound on the number of stabilizations needed to make two splittings equivalent is known in some cases. If G_p and G_q are splittings of genus p and q with p ≤ q, and M is non-Haken, then Rubinstein and Scharlemann obtained an upper bound of 5p + 8q − 9 for the genus of a common stabilization [18]. For all previously known examples of manifolds with distinct splittings, the splittings become equivalent after a single stabilization of the larger genus Heegaard surface. The question of whether a single stabilization always suffices is sometimes called the stabilization conjecture [9] (Problem 3.89), [20], [22], [21]. In Section 7 we show that this conjecture does not hold. There are pairs of genus g splittings of a 3-manifold that require g stabilizations to become equivalent. Theorem 1.1.
For each g > 1 there is a 3-manifold M_g with two genus g Heegaard splittings that require g stabilizations to become equivalent. We outline the idea in Section 2. In Section 3 we describe the construction of the 3-manifolds M_g. We derive isoperimetric inequalities used in the proof in Section 4 and discuss deformations of surfaces to harmonic maps in Section 5. In Section 6 we show how a Heegaard splitting gives rise to a family of Heegaard surfaces that sweep out the 3-manifold. The proof of Theorem 1.1 is given in Section 7. Finally, in Section 8 we show how a somewhat weaker result can be obtained for an easily constructed class of hyperbolic manifolds. This result was presented at the American Institute of Mathematics Conference on Triangulations, Heegaard Splittings, and Hyperbolic Geometry held in December 2007. At this conference D. Bachman announced, using different methods, examples giving a lower bound of g − 4 for the number of required stabilizations. Outline of the argument Let M_φ be a hyperbolic 3-manifold that fibers over S^1 with monodromy φ and let M̃_φ denote its infinite cyclic cover. A pictorial representation of M_φ is given in Figure 1. Cutting open M_φ along a fiber gives a fundamental domain B of the Z-action on the infinite cyclic cover, which we call a block. Blocks are homeomorphic, but not isometric, to the product of a surface and an interval. They are foliated by fibers of M_φ, which in B we call slices. By cutting open a cyclic cover of M_φ we obtain a hyperbolic 3-manifold with as many adjacent blocks as we wish. The manifold M_g used in our main result has pinched negative curvature. It contains two handlebodies H_L and H_R with fixed Riemannian metrics, separated by a region homeomorphic to the product of a surface of genus g with an interval. This intermediate piece is hyperbolic and isometric to 2n adjacent blocks. The first n blocks form a submanifold called L and the next n form a submanifold called R, as in Figure 2. The value of n can be chosen as large as desired without changing the geometry of H_L and H_R. Details are in Section 3. M_g has two obvious Heegaard splittings, E_0 = (H_L ∪ L, H_R ∪ R, S) and E_1 = (H_R ∪ R, H_L ∪ L, −S), where S is a surface of genus g separating H_L ∪ L and H_R ∪ R and −S indicates S with reversed orientation. We will show that these splittings are not k-stably equivalent for k < g. Let G_0 be the Heegaard splitting obtained by stabilizing E_0 (g − 1) times and G_1 be the Heegaard splitting obtained by stabilizing E_1 (g − 1) times. If G_0 and G_1 are equivalent, then there is an isotopy {I_s, 0 ≤ s ≤ 1} of M_g that induces a diffeomorphism of M_g carrying G_0 to G_1. A family of Heegaard splittings {G_s = I_s(G_0), 0 ≤ s ≤ 1} interpolates between G_0 and G_1. Associated to each Heegaard splitting G_s is a family of surfaces F_{s,t} sweeping out M_g from one core to the other. In Section 5 we show that such a family of surfaces can be deformed to a family of harmonic, or energy minimizing, maps. These harmonic surfaces have area uniformly bounded by a constant A_0. This area bound restricts the way that surfaces can divide up the volume of M_g. The surface cannot simultaneously split in half the volumes of L and L ∪ R. In Section 7 we use this to show that surfaces in such bounded area families cannot interpolate between G_0 and G_1. We conclude that E_0 and E_1 are not k-stably equivalent for k < g. Remark. The two Heegaard splittings E_0 and E_1 of M_g do become equivalent after g stabilizations.
After adding g trivial handles, a Heegaard surface can be isotoped to a surface formed by taking two genus g slices connected by a thin tube. This can be isotoped so that one slice splits the volume of L in half while the second does the same to R, and together they bisect the volume of L ∪ R. This type of surface arises in a family of bounded area surfaces interpolating between genus 2g stabilizations of E_0 and E_1. Construction of M_g In this section we describe a certain Riemannian 3-manifold M_g of Heegaard genus g, based on the work of Namazi and Souto [12]. In Section 8 we give a simpler construction of a hyperbolic manifold that gives slightly weaker lower bounds on the number of stabilizations needed to make two Heegaard splittings equivalent. We begin with a hyperbolic manifold M_φ that fibers over the circle with fiber a genus g surface. Fix a fibration of M_φ with fibers {S_t, 0 ≤ t ≤ 1} satisfying φ(S_0) = S_1. Define a block B to be the manifold obtained by cutting open M_φ along S_0 and a block manifold B_n to be a union of n blocks, formed by placing end to end n copies of B. The manifold B_n is topologically, though not geometrically, the product of a surface of genus g and an interval. Its geometry can be obtained by cutting open the n-fold cover of M_φ along a lift of S_0. We call the fibers of B_n slices and label them by S_t, 0 ≤ t ≤ n. We label the union of all fibers between S_{t_1} and S_{t_2} by S_{[t_1, t_2]}. The properties we desire for the manifold M_g are that it is a union of four pieces, as in Figure 2. Two pieces are genus g handlebodies, H_L and H_R, and the other two, L and R, are each homeomorphic, though not isometric, to a product F_g × [−1, 1], where F_g is a surface of genus g. L and R are each hyperbolic, and isometric to a block manifold B_n. The sectional curvatures of M_g are pinched between −1 − ε_0 and −1 + ε_0, where ε_0 > 0 can be chosen arbitrarily small. For our purpose we take ε_0 = 1/2. Namazi-Souto produced manifolds very similar to M_g [12]. While the manifolds they construct are suitable for the constructions we give, the argument is simpler if we modify the metric slightly so that the middle part of M_g is precisely, rather than approximately, hyperbolic. We require the metric on M_g to be isometric to a union of blocks on L ∪ R. This can be arranged by perturbing the manifold M in Theorem 5.1 of [12], which has sectional curvatures pinched between −1 − ε and −1 + ε, where ε > 0 can be chosen arbitrarily small. This manifold has a product region separating two handlebodies with geometry close (in the C^2-metric) to the geometry of a block. A small C^2-perturbation gives a new metric on M with sectional curvatures between −3/2 and −1/2 everywhere and exactly isometric to a block on an interval separating two handlebodies. Now cut open the manifold along a slice in this hyperbolic block and insert a copy of B_n to obtain M_g, with n as large as desired. However large the choice of n, the geometry of M_g is unchanged on the two complementary handlebodies of B_n, which we call H_L and H_R. Isoperimetric inequalities In this section we develop some isoperimetric inequalities for curves in surfaces and for surfaces in 3-manifolds. The following isoperimetric inequality holds for a curve in the hyperbolic plane H^2 [2]. Lemma 4.1. A closed curve c in H^2 bounds a disk of area less than Length(c). A proof can be obtained by using symmetry to show that a round circle bounds at least as much area as any other curve, using no more length. Explicit formulas for length and area in hyperbolic space then imply the inequality.
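To make the last step of this sketch explicit, the following standard computation (included here only for the reader's convenience; it is not part of the original argument) records the length and area of a round disk D_r of radius r in H^2 and compares them:

\[
\mathrm{Length}(\partial D_r) = 2\pi \sinh r, \qquad \mathrm{Area}(D_r) = 2\pi(\cosh r - 1),
\]
\[
\mathrm{Area}(D_r) - \mathrm{Length}(\partial D_r) = 2\pi(\cosh r - \sinh r - 1) = 2\pi\,(e^{-r} - 1) < 0 \quad \text{for } r > 0,
\]

so a round circle, and hence by the symmetrization argument any closed curve, bounds area strictly less than its length.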
The curve does not need to be embedded or connected. We now consider more general isoperimetric inequalities for curves in surfaces and surfaces in 3-manifolds. In these settings we consider the areas and volumes of 2-chains and 3-chains with boundary. We first give an isoperimetric inequality for curves in a compact family of Riemannian metrics on a surface. Lemma 4.2. Let (S, g_r) be a closed 2-dimensional surface with Riemannian metric g_r, r ∈ U, where g_r is a family of metrics smoothly parameterized by a compact set U ⊂ R^n. There is a constant K such that for any surface in the family (S, g_r): (1) a null-homotopic curve c in S is the boundary of a disk f: D → S with Area(f(D)) ≤ K · Length(c); (2) a null-homologous curve c in S is the boundary of a 2-chain X_2 in S with Area(X_2) ≤ K · Length(c). Proof. First consider the case where c is a null-homotopic curve. For a hyperbolic surface S, c lifts to H^2, where it bounds a mapped-in disk of area less than Length(c) by Lemma 4.1. The surface (S, g_r) is conformally equivalent to a hyperbolic surface, so there is a hyperbolic metric (S, g) and a diffeomorphism from (S, g) to (S, g_r) that stretches or compresses any vector in the tangent space of (S, g) by a factor of at most λ. A single choice of λ can be made for all the surfaces in the compact family (S, g_r). Lengths of curves measured in (S, g) differ from lengths in (S, g_r) by a factor of at most λ, and the area of a region in (S, g) differs from the area in (S, g_r) by a factor of at most λ^2. By comparing the area of a spanning disk and Length(c) in (S, g), we see that in (S, g_r), Area(f(D)) ≤ λ^3 · Length(c). The same argument applies for a collection of null-homotopic components. There is a constant ε > 0 that gives a lower bound for the injectivity radius for the family of metrics g_r. If the length of c is less than ε, then each of the components of c is null-homotopic and the above bound holds for the area of a collection of spanning disks. Now consider a general null-homologous curve c of length greater than ε. A small perturbation and orientation-preserving cut and paste transforms c into a union of disjoint embedded components that is homologous to c. A homology between this embedded curve and c has arbitrarily small area, so we can assume that c is a collection of embedded disjoint curves. Each side of c gives a 2-chain with boundary c, and so Area(S) gives an upper bound to the area of a 2-chain X_2 spanning c. Therefore Area(X_2) ≤ Area(S) ≤ (Area(S)/ε) · Length(c). The Lemma follows with K = λ^3 + Area(S)/ε. To simplify the next calculations, it is convenient to work with chains having Z_2-coefficients. With this choice we do not need to pay attention to orientations, signs, or multiplicity other than even or odd. A 2-chain spanning a general position curve in a surface is obtained by 2-coloring the complement of the curve, dividing the surface into black and white regions separated by the curve. The area of a spanning 2-chain is the area of either the white or the black subsurface, and has a value between zero and the area of the surface. The two choices of spanning 2-chain have areas that sum to the area of the entire surface. A similar statement applies for 3-chains spanning a surface in a 3-manifold. We now analyze the geometry of surfaces in block manifolds. The metrics on the slices S_t of a block manifold, up to isometry, are parameterized by a closed interval, so Lemma 4.2 applies. Therefore there is a constant K_1, independent of n, such that a null-homologous curve c on a slice S_t bounds a 2-chain X_2 in that slice with Area(X_2) ≤ K_1 · Length(c). Each slice is an incompressible surface in B_n and therefore with its induced metric has injectivity radius greater than or equal to that of M_φ.
Set ε to be the constant ε = min_{t ∈ [0,1]} {injectivity radius(M_φ), Area(S_t)/10K_1}. Now consider a proper separating surface (F, ∂F) in (B_n, ∂B_n). For each t, the complement of F ∩ S_t in the slice S_t can be 2-colored. In slices S_t where Length(F ∩ S_t) ≤ ε, each curve of F ∩ S_t is null-homotopic. Lemma 4.2 implies that there is a collection of disks spanning the components of F ∩ S_t whose total area is less than Area(S_t)/10. Call the subsurface of S_t missing these disks the large side of F ∩ S_t. Alternately, this subsurface is the largest of the connected complementary components of F ∩ S_t in S_t. Lemma 4.3. If the genus of F is less than g, then an arc α from the large side of F ∩ S_{t_1} to the large side of F ∩ S_{t_2} intersects F algebraically zero times. Proof. Cap off the curves in F ∩ S_{t_1} and F ∩ S_{t_2}, each of which is shorter than the injectivity radius, by adding disks missing the large side of each surface. These disks do not intersect the arc α. The resulting closed surface in the region between S_{t_1} and S_{t_2} has genus less than g. Such a surface is null-homologous in the region between these two slices, which is homeomorphic to the product of a genus g surface and an interval, and therefore cannot algebraically separate them. The next lemma gives a 3-dimensional isoperimetric inequality for surfaces in block manifolds. Note that Lemma 4.4 is false if the genus is at least g: a slice of genus g can split the volume of B_n in half. Lemma 4.4. Let (F, ∂F) ⊂ (B_n, ∂B_n) be a null-homologous proper surface in a block manifold, having genus less than g and area less than some constant A_0. Then there is a constant B_0, independent of n, such that F bounds a 3-chain X in B_n with Volume(X) < B_0. For n sufficiently large, Volume(X) < Volume(B_n)/10. Proof. Since F is null-homologous, we can 2-color its complement. This gives two 3-chains with boundary F, and we take X to be the one of smaller volume. The coarea formula [6] implies that the volume of X is comparable to the integral of its cross-sectional areas. Namely, there is a constant C_1 such that Volume(X) ≤ C_1 ∫_0^n Area(X ∩ S_t) dt. (1) The coarea formula also gives an inequality between the area of F in B_n and the lengths of its intersections with slices. There is a constant C_2 such that ∫_0^n Length(F ∩ S_t) dt ≤ C_2 · Area(F). (2) The constants C_1 and C_2 are determined by the geometry of a single block B, and do not depend on n. We estimate the volume of X by integrating the area of its cross sections. To do so, we need to consistently specify for each slice S_t which of the two complementary subsurfaces of F ∩ S_t lies in X. If Length(F ∩ S_t) < ε then we take the side of F containing the complementary subsurface of smaller area in S_t. This subsurface has area less than K_1 · Length(F ∩ S_t) by Lemma 4.2 and misses the large side of F ∩ S_t. Each such subsurface lies on the same side of F (mod 2) by Lemma 4.3. For slices in which Length(F ∩ S_t) ≥ ε, the construction given in Lemma 4.2 gives a bound K_1 · Length(F ∩ S_t) for the area of both complementary sides in S_t of F ∩ S_t, and in particular for the side lying in X. By Equations (1) and (2), X has volume bounded above by C_1 K_1 ∫_0^n Length(F ∩ S_t) dt ≤ C_1 K_1 C_2 · Area(F) < C_1 K_1 C_2 A_0 = B_0. This bound is independent of n, so for n sufficiently large it is less than Volume(B_n)/10. Finally we give a 3-dimensional isoperimetric inequality that implies small area surfaces in a Riemannian manifold bound regions of small volume. The argument follows the line of an argument of Meeks [13]. The result is valid for both integer and Z_2 coefficients, though we need only the latter. Lemma 4.5. Let M be a closed Riemannian 3-manifold. For every v > 0 there is a δ_0 > 0 such that any null-homologous surface F in M with Area(F) < δ_0 bounds a 3-chain X in M with Volume(X) < v. Proof. By cut and paste we can reduce to the case where F is embedded, though not necessarily connected.
The result is true for a surface in the unit ball in R^3, where we have the inequality V ≤ c · A^{3/2} relating the area A of a surface and the volume V of the region it bounds. For a fixed Riemannian metric on a ball, there is a bound λ on the maximum stretch or compression of the length of a vector. This implies a bound on the volume increase of a region of λ^3 and a bound on the change in the area of a surface of λ^2. It follows that V ≤ C · A^{3/2} (3) for some constant C depending only on the metric. Now take a cell decomposition of M with a single 3-cell R. By the coarea formula and the area bound for F, for sufficiently small δ_0 the 2-complex forming the boundary ∂R of this 3-cell can be perturbed so that the length of its intersection with F is arbitrarily short. The isoperimetric inequality for the boundary 2-sphere of R allows us to surger F along short intersection curves with the boundary, while keeping its total area less than a constant δ_1 satisfying lim_{δ_0 → 0} δ_1 = 0. The surgered surface lies in a Riemannian 3-ball in which we can apply Equation (3). The resulting small volume region in a ball gives a region X in M with ∂X = F, since X has canceling boundary (mod 2) on ∂R. Furthermore the volume of X approaches 0 as δ_0 → 0. In particular we can arrange that Volume(X) < v by choosing δ_0 sufficiently small. Harmonic and bounded area maps Eells and Sampson showed that a map from a Riemannian surface F with metric h to a negatively curved manifold M is homotopic to a harmonic map [5]. This harmonic map is unique unless its image is a point or a closed geodesic [7], cases that we will not need to consider. It is obtained by deforming an initial map of a surface with a given Riemannian metric (or conformal structure) in the direction of fastest decrease of the energy, called the tension field. The resulting harmonic map depends smoothly on the metrics of both the domain and the image, as shown in Theorem 3.1 of [3]. See also [19]. A homotopy class of maps from a surface to a 3-manifold is elementary if it induces a homomorphism of fundamental groups whose image is trivial or cyclic. Since harmonic maps satisfy a maximum principle, they are more negatively curved than the ambient manifold wherever they are immersed [19]. The Gauss-Bonnet theorem then implies an area bound proportional to the genus. The immersion assumption can be removed by an approximation of a general map by immersions. A detailed argument is found in Theorem 3.2 of [10]. Lemma 5.2. A harmonic map f: F → M from a genus g surface to a hyperbolic 3-manifold M has area bounded above by 4π(g − 1). If M has sectional curvatures between −s and −r, for 0 < r < s, then its area is bounded above by 4π(g − 1)/r. For any Riemannian metric on F, a basic inequality relates the energy and area of a map f [3]: Area(f) ≤ (1/2) E(f). Equality occurs precisely when f is almost conformal, i.e., conformal except possibly at finitely many singular points with zero derivative. In particular, equality holds for an isometric immersion, an immersion of a surface into a Riemannian manifold with the induced metric on the surface. Lemma 5.3. A smooth family of immersions {f_{u,0}: F → M} of a surface F into a negatively curved 3-manifold M, none of which is elementary, can be deformed through families of maps {f_{u,v}, 0 ≤ v ≤ 1} to a family of harmonic maps {f_{u,1}}, with the area of each surface f_{u,v}(F) bounded above by the area of the initial surface f_{u,0}(F). Proof. We begin by taking the induced metric on the surface F using each of the family of maps f_{u,0}. This gives a family of isometric immersions of F, for each of which the energy equals twice the area. The Eells-Sampson process of deforming along the tension field gives an energy decreasing deformation that converges to a harmonic map [5].
Since the energy is non-increasing during this flow, and the area is bounded above by half the energy, the area of each surface in this flow is bounded above by its initial value. Sweepouts Consider a family of maps k: F × (−1, 1) → M such that lim_{t→±1} Area(k(F, t)) = 0 and let F_t denote the surface k(F, t). By Lemma 4.5 we know that the volume of a 3-chain bounded by F_t also approaches zero as t → ±1. It follows that there is a β > 0 such that one side of F_t has volume less than Volume(M)/10 if t ∈ (−1, −1 + β] ∪ [1 − β, 1). In particular F_{−1+β} bounds a 3-chain C_{−1+β} of volume less than Volume(M)/10 and similarly F_{1−β} bounds such a chain C_{1−β}. Consider the 3-cycle Z ∈ H_3(M; Z_2) formed by adding the 3-chains k(F × [−1 + β, 1 − β]), C_{−1+β} and C_{1−β}. We say the family of maps k has degree one if Z represents the fundamental homology class of M (with Z_2-coefficients), and in that case we define it to be a sweepout of M. Note that changing the value of β continuously changes the volume of Z, while a change in the homology class of Z would cause the volume to change by the volume of M. Thus the choice of β does not affect the question of whether k has degree one. We digress somewhat to point out that our constructions give a new approach to constructing sweepouts of bounded area. Pitts and Rubinstein showed the existence of an unstable minimal surface in a 3-manifold using a minimax argument. The minimal surface they construct has maximal area in a 1-parameter family of surfaces obtained from a sweepout given by a strongly irreducible Heegaard splitting. Since a genus g minimal surface in a hyperbolic manifold has area less than 4π(g − 1), this implies an area bound for each surface in the sweepout. The existence of a sweepout composed of bounded area surfaces has implications for the geometry and Heegaard genus of a 3-manifold, as noted in Rubinstein [17] and Bachman-Cooper-White [1]. Harmonic maps give an alternate way to obtain the same area bound implied by Pitts-Rubinstein, without finding a minimal surface and without assuming that a Heegaard splitting is strongly irreducible. Theorem 6.1. If M is a hyperbolic 3-manifold with a genus g Heegaard splitting then M has a sweepout whose surfaces each have area bounded above by 4π(g − 1). Proof. Parametrize each surface of the Heegaard splitting to obtain a smoothly varying family of embedded maps {f_t: (F, g_t) → M, −1 < t < 1}, where g_t is the induced metric on F under the map f_t. By Lemma 5.1, the family of maps is deformable to a smooth sweepout of harmonic maps {h_t: (F, g_t) → M}. Each map h_t has area bounded above by 4π(g − 1), by Lemma 5.2. Given a sweepout of a 3-manifold M and two subsets L and R, we can characterize the direction of the sweepout relative to L and R, describing which of L and R is first engulfed. Let β be a constant that satisfies the conditions of Lemma 4.5, so that for −1 < s ≤ −1 + β, F_s bounds a 3-chain of volume less than Volume(M)/10. If −1 < t < −1 + β define K_t to be the 3-chain C_t with boundary F_t, given by the smaller volume side of F_t. If t ≥ −1 + β define K_t to be the 3-chain obtained by adding the 3-chains k(F × [−1 + β, t]) and C_{−1+β}, again with boundary F_t. As t increases on (−1, 1) the volume of K_t changes continuously, and satisfies lim_{t→−1} Volume(K_t) = 0. Let Y denote the set of points t ∈ (−1, 1) where K_t contains half the volume of L ∪ R, that is, Volume(K_t ∩ (L ∪ R)) = Volume(L ∪ R)/2. For a generic sweepout this set is finite and odd. We call a point t ∈ Y an L point if Volume(K_t ∩ L) > Volume(K_t ∩ R), and an R point otherwise.
A sweepout is an LR-sweepout if it has an odd number of L points and an RL-sweepout if it has an odd number of R points. The Heegaard splitting E_0 gives rise to an LR-sweepout that begins with surfaces near a graph at the core of H_L, sweeps out L ∪ R with embedded slices, and ends with surfaces that collapse to a graph at the core of H_R. The stabilized Heegaard splitting G_0 gives rise to an LR-sweepout by embedded surfaces of genus 2g − 1. A stabilization adds a loop to each of the graphs forming the cores of E_0, the two loops linking once in M_g, and each crossing the separating surface S. Between the resulting graphs is a product region which can be filled with Heegaard surfaces of the stabilized splitting. See Figure 3, which shows two cores and a surface in G_0 that has been stabilized once. Assuming that G_0 and G_1 are equivalent, composing the resulting sweepout of G_0 with the diffeomorphisms {I_s, 0 ≤ s ≤ 1} gives a family of genus 2g − 1 sweepouts connecting G_0 and G_1. Lemma 6.1. Suppose that E_0 is equivalent to E_1 after (g − 1) stabilizations. Then there is a constant A_0, independent of the number of blocks n, and a family of surfaces {f_{s,t}: F → M_g, 0 ≤ s ≤ 1, −1 < t < 1} satisfying the following conditions: (1) F has genus 2g − 1; (2) for each fixed s, {f_{s,t}, −1 < t < 1} is a sweepout; (3) each surface in the two sweepouts {f_{0,t}(F)} and {f_{1,t}(F)} has area bounded above by A_0; (4) {f_{0,t}} is an LR-sweepout and {f_{1,t}} is an RL-sweepout. Proof. M_g is formed from the union of a handlebody H_L, n blocks forming L, n blocks forming R, and a second handlebody H_R. The geometry of a single block, and of each of H_L, H_R, does not depend on n. Pick a core for each of H_L, H_R and foliate the complement of this core in each handlebody by embedded Heegaard surfaces, connecting the core to the slice that forms the boundary of H_L. Do the same for H_R, and fill L and R with interpolating slices. This gives a foliation of the complement of the two cores in M_g by genus g leaves {L_t, −1 < t < 1}. Now stabilize by adding (g − 1) loops to each core graph and (g − 1) handles to each surface between the two core graphs. By adding thin handles, this can be achieved while increasing the area of any surface by less than a_0 and with the additional (g − 1) 1-handles bounding a region of volume less than v_0, where the constants a_0 and v_0 can be chosen arbitrarily small, independently of n. For our purpose it suffices to pick v_0 < V_B/10, where V_B is the volume of a single block, and a_0 < 1. Let F be a surface of genus 2g − 1 and construct maps {f_{0,t}: F → M_g, −1 < t < 1} that smoothly parametrize the stabilized surfaces. We will refer to f_{0,t}(F) as F_{0,t}. Set the value of the constant A_0 to be the largest area of the surfaces {F_{0,t}, −1 < t < 1}. Note that our construction gives a value for A_0 that is independent of the number of blocks n in M_g. The Heegaard splitting G_0 = (H̄_1, H̄_2, S̄) obtained by stabilizing E_0 (g − 1) times is assumed to be equivalent to the reversed splitting G_1 = (H̄_2, H̄_1, −S̄), where S̄ = F_{0,0} and −S̄ indicates that the orientation of S̄ has been reversed. So an isotopy I_s, 0 ≤ s ≤ 1, from the identity map I_0 to a diffeomorphism I_1 carries (H̄_1, H̄_2, S̄) to (H̄_2, H̄_1, −S̄). Construct the family of surfaces f_{s,t}: F → M_g by defining f_{s,t} = I_s ∘ f_{0,t}. For any constant α > 0 we can assume, by stretching out a collar around the invariant surface F_{0,0}, that I_1 carries F_{0,t} to F_{0,−t} for each t ∈ [−1 + α, 1 − α]. For α sufficiently small the embedded surfaces lie in small neighborhoods of the images of the core graphs of G_0 under I_s, have area uniformly bounded above by a_0, and bound submanifolds having volume less than v_0.
The Heegaard splitting E_0 gives rise to a sweepout of M_g by genus g surfaces {L_t, −1 < t < 1} starting near the core of H_L and ending near the core of H_R. This sweepout foliates the complement of the two core graphs. The surface L_t bounds a 3-chain K_t that fills up the side containing {∪ L_{t'} : t' < t}. K_0 bisects the volume of L ∪ R and contains L but not R. The surfaces L_t, when parametrized, give an LR-sweepout with Volume(K_0 ∩ L) = nV_B while Volume(K_0 ∩ R) = 0. The surface F_{0,t}, obtained by stabilizing L_t, bounds a 3-chain K_{0,t} whose volume of intersection with L and R differs by less than v_0 < V_B/10 from that of K_t. Therefore F_{0,t} also gives an LR-sweepout. By the same argument applied to the stabilization of E_1, we have that F_{1,t} gives an RL-sweepout. All properties now follow. We now show that a path of sweepouts in M_g whose surfaces have area uniformly bounded by a constant A_0 cannot start with an LR-sweepout and end with an RL-sweepout, if n is sufficiently large. Define V_L = Volume(L) = Volume(R), so that 2V_L = Volume(L ∪ R). Lemma 6.2. Suppose A_0 is a constant and that M_g is constructed with 2n blocks. Then for n sufficiently large there does not exist a smooth family of maps {h_{s,t}, 0 ≤ s ≤ 1, −1 < t < 1} from a surface F to M_g satisfying the following conditions: (1) F has genus less than 2g. (2) For each fixed s, h_{s,t}: F → M_g is a sweepout. (3) Each surface h_{s,t}(F) has area bounded above by A_0. And such that {h_{0,t}} is an LR-sweepout while {h_{1,t}} is an RL-sweepout. Proof. Suppose such a family exists and let F_{s,t} denote h_{s,t}(F). For each F_{s,t} we construct a 3-chain K_{s,t} with boundary F_{s,t}. Pick β sufficiently small to satisfy the conditions of Lemma 4.5 for each 0 ≤ s ≤ 1. Then for each s and each −1 < t_0 ≤ −1 + β there is a 3-chain C_{s,t_0} with boundary F_{s,t_0} and volume less than V_L/10. If −1 < t < −1 + β take K_{s,t} to be equal to C_{s,t}. If −1 + β ≤ t < 1 take K_{s,t} to be the sum of C_{s,−1+β} and the 3-chain swept out by {F_{s,t'}, −1 + β ≤ t' ≤ t}. The volume of K_{s,t} varies continuously with s and t, giving a continuous function from [0, 1] × (−1, 1) → R. For each fixed s the surfaces F_{s,t} sweep out M_g with degree one, so Volume(K_{s,t} ∩ (L ∪ R)) tends to 0 as t → −1 and to Volume(L ∪ R) = 2V_L as t → 1. Let Q ⊂ [0, 1] × (−1, 1) denote the set of points (s, t) where K_{s,t} contains half the volume of L ∪ R, that is, where Volume(K_{s,t} ∩ (L ∪ R)) = (1/2) Volume(L ∪ R). If necessary, perturb the value 1/2 used to define Q to a regular value of the volume function evaluated on the rectangle. We then have that Q is a 1-manifold and that any path from the edge t = −1 to the edge t = 1 of the (s, t) rectangle must cross Q. It follows that there is a path contained in Q connecting the edges s = 0 and s = 1 of the (s, t) rectangle, as in Figure 4. We now claim that each component of Q consists entirely of L points or entirely of R points. If a component of Q has both L points and R points then there is a point (s, t) ∈ Q on that component where Volume(K_{s,t} ∩ L) = Volume(K_{s,t} ∩ R), and since these two volumes sum to V_L, we have Volume(K_{s,t} ∩ L) = Volume(K_{s,t} ∩ R) = V_L/2. (4) The area of F_{s,t} is bounded above by A_0, so Lemma 4.4 implies that for n sufficiently large, F_{s,t} bounds a 3-chain in L with volume less than (0.1)V_L. Since any two 3-chains in L with the same boundary have volumes in L that sum to a multiple of V_L, we have that Volume(K_{s,t} ∩ L) lies within (0.1)V_L of a multiple of V_L. As a fraction of V_L, Volume(K_{s,t} ∩ L)/V_L therefore lies outside the interval [0.1, 0.9]. This contradicts Equation (4). We conclude that components of Q consist either entirely of L or entirely of R points. A component curve in Q either meets each edge s = 0 and s = 1 once, meets one edge of the (s, t) rectangle twice, or is disjoint from both. The parity of the number of L points and the number of R points on the two families h_{0,t}(F) and h_{1,t}(F) is the same for each edge s = 0 and s = 1, and thus either both are LR-sweepouts or both are RL-sweepouts.
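The contradiction at the heart of the preceding claim can be summarized in a single display (this merely restates inequalities already derived above, with the same constants):

\[
\mathrm{Vol}(K_{s,t}\cap L) = \mathrm{Vol}(K_{s,t}\cap R) = \tfrac{1}{2}V_L \quad\text{at such a point of } Q,
\qquad\text{while Lemma 4.4 gives}\quad
\frac{\mathrm{Vol}(K_{s,t}\cap L)}{V_L} \notin [0.1,\, 0.9],
\]

and the two statements are incompatible, so no component of Q can contain both L points and R points.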
We now show how to replace a family of sweepouts that starts and ends with sweepouts whose surfaces have area less than A_0, but with no control on the area of surfaces in the intermediate sweepouts, with a homotopic family of sweepouts whose surfaces have area uniformly bounded by A_0. Lemma 6.3. Let {f_{s,t,0}: F → M_g, 0 ≤ s ≤ 1, −1 < t < 1} be a family of surface maps such that (1) the genus of F is less than 2g; (2) for each fixed s, {f_{s,t,0}, −1 < t < 1} is a sweepout; (3) each surface in the two sweepouts f_{0,t,0}(F) and f_{1,t,0}(F) has area bounded above by a constant A_0. Then if n is sufficiently large, there is a family of maps {f_{s,t,u}, 0 ≤ u ≤ 1} such that (1) for each fixed s and u, {f_{s,t,u}: F → M_g, −1 < t < 1} is a sweepout; (2) each surface f_{s,t,1}(F) has area bounded above by A_0; (3) if {f_{0,t,0}(F)} is an LR-sweepout and {f_{1,t,0}(F)} is an RL-sweepout, then {f_{0,t,1}(F)} is an LR-sweepout and {f_{1,t,1}(F)} is an RL-sweepout. Proof. We define a family of Riemannian metrics h_{s,t} on F for each 0 ≤ s ≤ 1, −1 < t < 1, by taking the induced metric pulled back from M_g by the immersion f_{s,t,0}. Then each map f_{s,t,0} is conformal and has energy equal to twice its area. We flow the family f_{s,t,0} to a family of harmonic maps f_{s,t,1} as in Lemma 5.3, with the area of each harmonic surface f_{s,t,1}(F) less than A_0. Each surface in the sweepout f_{0,t,0}(F) has area bounded above by A_0, and this area bound is maintained for each surface {f_{0,t,u}(F), 0 ≤ u ≤ 1} in the homotopy to harmonic maps, as in Lemma 5.2. Similarly the homotopy to harmonic maps of the sweepout f_{1,t,0}(F), given by {f_{1,t,u}(F), 0 ≤ u ≤ 1}, has surfaces whose areas are uniformly bounded above by A_0. Thus if n is sufficiently large, the sweepout {f_{0,t,1}(F)} is an LR-sweepout, by Lemma 6.2, and similarly {f_{1,t,1}(F)} is an RL-sweepout. Inverting Heegaard surfaces We now prove the main result: Proof of Theorem 1.1. We begin with the two genus g Heegaard splittings E_0 and E_1 of M_g. We will show that these splittings are not k-stably equivalent for k < g if n is sufficiently large. It suffices to take k = g − 1. Assume to the contrary that for all n, the splittings G_0 and G_1 obtained by stabilizing E_0 and E_1 (g − 1) times are equivalent. For n large, Lemma 6.1 implies there is a family of sweepouts {f_{s,t,0}: F → M_g, 0 ≤ s ≤ 1, −1 < t < 1} of genus 2g − 1 with {f_{0,t,0}(F), −1 < t < 1} an LR-sweepout, {f_{1,t,0}(F), −1 < t < 1} an RL-sweepout, and each surface in the two sweepouts f_{0,t,0}(F) and f_{1,t,0}(F) of area bounded above by A_0. Lemma 6.3 implies that this family can be deformed to a new family of sweepouts {f_{s,t,1}: F → M_g, 0 ≤ s ≤ 1, −1 < t < 1} in which {f_{0,t,1}(F), −1 < t < 1} is an LR-sweepout, {f_{1,t,1}(F), −1 < t < 1} is an RL-sweepout, and each surface f_{s,t,1}(F) has area bounded above by A_0. This is a path of sweepouts, all of whose surfaces have area less than A_0, that connects an LR-sweepout to an RL-sweepout. Lemma 6.2 states that no such path of sweepouts can exist, contradicting the assumption that fewer than g stabilizations can make E_0 and E_1 equivalent. A Hyperbolic Example The Riemannian manifolds M_g used in our construction were negatively curved, but not hyperbolic. We now show how to construct a family of hyperbolic manifolds that give somewhat weaker lower bounds on the number of stabilizations required to make two genus g splittings equivalent. Note that block manifolds cannot be isometrically embedded into a hyperbolic 3-manifold in which their slices are Heegaard surfaces, because the fibers of a surface bundle lift to planes in the universal cover. However there is an isometric embedding of block manifolds into a hyperbolic manifold in which slices are separating incompressible surfaces. These surfaces become Heegaard surfaces after two stabilizations.
Let N_0 be a hyperbolic 3-manifold that is a union of two I-bundles over a non-orientable surface, glued along their common genus k boundary surface, where k is an integer greater than one. Such hyperbolic manifolds are double covered by a hyperbolic surface bundle over S^1. Some explicit examples can be found in [16]. The boundary surface of each I-bundle has a neighborhood isometric to a neighborhood of a fiber in its double cover. We can cut open along this fiber and insert a block manifold with arbitrarily many blocks to obtain a hyperbolic manifold N, still constructed as a union of two I-bundles. We let S denote a surface separating the two I-bundles in the center of the block manifold. Removing a neighborhood of an interval fiber from an I-bundle with a genus k boundary surface results in a handlebody of genus k + 1. Thus N has a genus g = k + 2 Heegaard surface obtained by adding a 1-handle to each side of S, with the core of each 1-handle an interval in each I-bundle. The two orderings of these handlebodies give rise to two Heegaard splittings of N, and to corresponding sweepouts that fill the two I-bundles in opposite order. The arguments used on M_g in Section 7 now apply to show that N has two genus g splittings that require no fewer than g − 4 stabilizations to become equivalent.
Relationship of mechanical impact magnitude to neurologic dysfunction severity in a rat traumatic brain injury model Objective Traumatic brain injury (TBI) is a major brain injury type commonly caused by traffic accidents, falls, violence, or sports injuries. To obtain mechanistic insights about TBI, experimental animal models such as weight-drop-induced TBI in rats have been developed to mimic closed-head injury in humans. However, the relationship between the mechanical impact level and neurological severity following weight-drop-induced TBI remains uncertain. In this study, we comprehensively investigated the relationship between physical impact and graded severity at various weight-drop heights. Approach The acceleration, impact force, and displacement during the impact were accurately measured using an accelerometer, a pressure sensor, and a high-speed camera, respectively. In addition, the longitudinal changes in neurological deficits and balance function were investigated at 1, 4, and 7 days post-TBI lesion. The inflammatory and injury markers, including glial fibrillary acidic protein, beta-amyloid precursor protein, and bone marrow tyrosine kinase gene in chromosome X, were examined by Western blot analysis in the frontal cortex, hippocampus, and corpus callosum at 1 and 7 days post-lesion. Results Gradations in impact pressure produced progressive degrees of injury severity in the neurological score and balance function. Western blot analysis demonstrated that all inflammatory expression markers were increased at 1 and 7 days post-impact injury when compared to the sham control rats. The severity of neurologic dysfunction and the induction of inflammatory markers strongly correlated with the graded mechanical impact levels. Conclusions We conclude that the weight-drop-induced TBI model can produce graded brain injury and induction of neurobehavioral deficits and may have translational relevance to developing therapeutic strategies for TBI. Introduction Traumatic brain injury (TBI) is one of the most common brain injuries caused by an external mechanical force, such as crushing, rapid acceleration or deceleration impact, and projectile penetration [1]. TBI is estimated to affect approximately 1.7 million people and to account for more than $76.5 billion in medical care expenditures every year in the United States [2]. Following TBI, temporary or permanent impairment of cognitive, physical, and psychosocial functions develops depending on the severity of injury [3]. To provide a more stable and controllable environment and obtain detailed mechanical insights into TBI, several animal models of TBI have been developed. One type of rodent model, the weight-drop model (such as Marmarou's impact acceleration model), has been widely used to mimic diffuse axonal injury and concussion caused by falls or motor vehicle accidents in individuals with TBI [4,5]. Furthermore, Marmarou's weight-drop procedure provides an easy and inexpensive method for producing graded brain injury in animals by simply altering the height from which the weight is dropped (1-2 m) [4,5]. However, a high mortality rate and low reproducibility after severe injury are the two main disadvantages of this model [4][5][6]. Previous studies have demonstrated that TBI induces sensorimotor and cognitive impairments in weight-drop-induced TBI models [7,8].
Although the detailed pathophysiology and TBI markers following TBI are currently under investigation, recent research shows significant post-TBI increases in glial fibrillary acidic protein (GFAP), bone marrow tyrosine kinase gene in chromosome X (BMX), and beta-amyloid precursor protein (APP), which may reflect the levels of inflammatory or axonal injury and thus act as indicators of trauma severity [9][10][11][12][13][14][15][16]. For translational purposes, identifying and comparing the mechanical or kinematic properties during impact and the induced brain injury level between animals and humans is crucial. Previous studies have demonstrated that the level of mechanical responses such as linear acceleration or velocity during impact is associated with traumatic axonal injury or severity of behavior at 24 h post-injury [17,18]. However, detailed and accurate data regarding the external mechanical impact force and the rapid acceleration or deceleration that result in brain damage remain insufficient. Furthermore, although weight-drop-induced TBI in rats has been employed in various neuroscience studies [6,19,20], the literature regarding the time-course changes in neurobehavioral function and pathophysiological processes following TBI, which may provide insights into the underlying pathophysiology of the disease for future diagnostic purposes and therapeutic applications, is scant. Furthermore, the physiological and behavioral responses to various impacts to the brain, similar to injury mechanisms in humans, have rarely been investigated in animal models. The detailed relationship between the impact force and the severity of brain damage has not yet been fully explored. This lack of information regarding TBI animals may restrict the use of the knowledge obtained from TBI animal models. The present study applied a weight-drop TBI model designed previously [4,5] and used an accelerometer, a pressure sensor, and a high-speed camera to accurately measure the acceleration, impact force, and displacement, respectively. Furthermore, the measured data were used to investigate the relationship between the mechanical impact level and the severity of neurological and motor behavioral changes in rats. Methods Fifty-two male Sprague-Dawley rats (BioLASCO Taiwan Co., Ltd, Yilan, Taiwan) weighing between 364 and 425 g (i.e., 10-12 weeks of age) were used for the present experiment. All experimental procedures were preapproved by the Institutional Animal Care and Use Committee (IACUC) of Taipei Medical University (TMU) and followed the TMU IACUC guidelines to treat animals humanely and reduce animal suffering by the use of appropriate anesthesia and analgesics (IACUC Approval No. LAC-2013-0199). Rats were housed on a 12-h light/dark cycle in a temperature- and humidity-controlled animal center until experimental use at TMU. Animals were monitored twice daily before and after the lesion, and animals were humanely euthanized according to accepted, pre-established endpoint criteria: loss of >20% body weight, labored breathing, hunched posture, and lethargy. Furthermore, at the end of the study, all animals were humanely euthanized via carbon dioxide inhalation followed by cervical dislocation. Induction of brain injury and monitoring of impact parameters To minimize animal suffering and distress during surgery, animals were anesthetized using intraperitoneal injection of tiletamine-zolazepam (50 mg/kg, i.p.; Zoletil, Vibac, France) and xylazine (10 mg/kg; Rompun, Bayer, Leverkusen, Germany) 30 min prior to impact.
To record the mechanical impact of the head during weight-drop, pressure was recorded using a load cell sensor (10 mm in diameter, 3 mm in thickness; Interface LBS-250, Scottsdale, AZ, USA) fixed to the central portion of the rat skull vault between the bregma and lambdoid sutures. To induce TBI, Marmarou's impact acceleration model was modified and applied; rats were placed prone on flexible foam and were secured in place by using two elastic belts. A Plexiglas tube was then positioned vertically, and the lower end of the tube was centered directly above the pressure sensor. TBI was induced using a 450-g brass weight falling from 1, 1.5, or 2 m through a vertical transparent Plexiglas tube. The impact response was recorded by the load cell fixed on the rat skull. All the signals were measured using the data acquisition system (Biopac MP 36, Santa Barbara, CA, USA). Sham control rats underwent the same surgical procedures but did not receive weight-drop-induced TBI. Body temperature was monitored with a rectal probe throughout surgery, and the temperature was maintained at 37.0 ± 0.5°C using an adjustable heating pad during recovery from anesthesia. To record the kinematic changes in the head during impact, the impact event was captured using a high-speed video camera (EX-F1, Casio, Tokyo, Japan) (Fig 1). The high-speed video was recorded at 1200 frames/s during each experiment. Furthermore, the linear acceleration or deceleration of the rat head during impact was measured using an accelerometer (Model 3225M37, Dytran Instrument Inc., CA, USA) attached to the lateral side of the rat face. Pressure and acceleration signals were digitized and recorded at a sampling rate of 10 kHz by using the Biopac data acquisition system (MP36, BIOPAC System, Goleta, CA, USA). To analyze the changes in displacement, velocity, and acceleration during impact, a high-speed digital camera was used to record the displacement of the rat head during impact. Black and white line markers were painted on the impactor and helmet for subsequent image tracking analysis through video recordings (see S1 Video for demonstration). The changes in displacement (x) of the rat head during impact were determined by tracking the attached impactor and helmeted head by using an image analysis program written in MATLAB. The relationship among the displacement (x), velocity (v), and acceleration (a) of the rat head as functions of time (t) during impact can be formulated as follows: v = dx/dt and a = dv/dt = d²x/dt². The change in velocity (v) of the rat head was calculated by differentiating the displacement-time history of the head obtained from the recorded video data. In addition, the acceleration-time history was calculated from the velocity-time history. Moreover, the acceleration calculated using digital image analysis was compared with that measured by the accelerometer. Thus, the peak instantaneous acceleration and deceleration of the rat head were determined from the acceleration-time curves obtained from the digital camera and the accelerometer. After surgery, animals were monitored daily for pain and distress. Animals were administered the analgesic ketorolac for 48 h after surgery if pain and distress behaviors were observed. Behavioral assessments Neurological function. The modified neurological severity score (mNSS) is a multifunctional evaluation scale that comprises motor, sensory, reflex, and balance tests. Rats receive one point when they are unable to perform a test or lack the tested reflex; therefore, the score is proportional to injury severity.
The test scale ranges from 0 to 18 (normal score, 0; maximal deficit score, 18). Beam walking test. The beam walking test was used to evaluate fine motor coordination and balance function [21]. Rats were trained to walk from one end to the opposite end of a narrow plastic beam (80 cm long, 1.5 cm wide) at least five times before formal recording. In each test trial, the animal's performance was videotaped, and the average elapsed time to traverse the beam over five trials was then calculated [11]. Western blot We extracted brain tissue samples from the rats at 1 day and 7 days post-TBI to quantify the injury severity. The rats were rapidly anesthetized with sodium pentobarbitone (60 mg/kg i.p., Sweden) and then decapitated. Samples were extracted from the frontal cortex (FC), corpus callosum (CC), and hippocampus (H). The samples were then lysed with radioimmunoprecipitation assay buffer. For Western blot analysis, protein cell lysates (30 μg) were resolved on 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis gels and transferred to nitrocellulose membranes. Finally, the membranes with blots were incubated overnight with primary antibodies (anti-actin, 1:2000, Chemicon; anti-GFAP, 1:1000, Chemicon; anti-BMX, 1:1000, BD Biosciences; anti-APP, 1:1000, Novus) at 4°C. The densities of the protein bands were digitized, and the ratio of each target protein's density to the beta-actin or alpha-tubulin density was compared among the groups. Experimental design The rats were divided into four groups. A well-trained examiner, blinded to the type of injury, performed all examinations before and after injury. Behavioral alterations were assessed before the lesion and on days 1, 4, and 7 post-lesion by using the beam walking and mNSS tests. The animals receiving weight-drop-induced TBI were trained and pre-tested on these tasks at least 3 days before the TBI lesion to establish baseline data. After training and habituation, all behavioral test sessions were performed at the set time points under the same environmental conditions. After the behavioral tests on days 1 and 7 post-lesion, a subset of rats (n = 3 at each time point in each group) was sacrificed for Western blot analysis (Fig 2). Statistical analysis For statistical analysis of all behavioral measurements, a two-way repeated-measures analysis of variance (ANOVARM) with group (1-, 1.5-, and 2-m weight-drop groups) and time (i.e., before lesion and 1, 4, and 7 days post-lesion) factors was performed. Multiple within-subject comparisons were performed using post hoc least significant difference (LSD) analysis when the main effect of time was significant. Furthermore, we used linear regression to determine the coefficient of determination (R²) between the mechanical impact level (i.e., impact force, acceleration, and impact height), the severity of brain injury, and neurobehavioral function. Biochemistry comparisons between rats in the sham-lesion group and the various fall height groups were performed by one-way ANOVA with the LSD multiple comparison post hoc test. Data were analyzed using SPSS version 17.0 (SPSS Inc., USA) with the significance level set at p < 0.05 for each assessment. All data are presented as the mean ± standard error of the mean (SEM). Mortality rates Various mortality rates were observed among the groups depending on the fall height. A 50% (10/20) mortality rate was observed following weight-drop-induced TBI from 2 m. In the 1.5-m fall height group, a mortality rate of 12.5% (2/16) was observed.
No animal deaths occurred following injury in the 1-m fall height group. All deaths of experimental animals occurred immediately (<30 min) following the induction of brain injury. The impact displacement trace was captured by the high-speed video camera and is presented in Fig 4A. The changes in velocity and acceleration of the rat head were determined by differentiating the linear head displacement-time histories (Fig 4B and 4C). The average peak displacements, velocities, and accelerations for the 1-, 1.5-, and 2-m groups are summarized in Table 1. For the behavioral tests, neurological and balance functions were assessed 1 day prior to the weight-drop treatment and 1, 4, and 7 days after the weight-drop treatment. Group results for the three fall height groups are presented in Fig 5A and 5B. Fig 5A illustrates the pre- and post-TBI time-course changes in the mNSS. A two-factor ANOVARM on the mNSS over the 7 days showed a significant time × group interaction (F(6,72) = 4.60, p = 0.001) as well as significant time (F(3,72) = 22.93, p < 0.001) and group (F(2,24) = 5.15, p = 0.014) effects. Post hoc LSD analysis revealed that the mNSS in the 1-m, 1.5-m, and 2-m groups was significantly increased on day 1 and remained so up to day 7 after the weight-drop experiment compared with the pre-lesion level. Regarding the beam walking test, a two-factor ANOVARM revealed a significant main effect of time (F(3,66) = 9.218, p < 0.0001) but no significant time × group interaction (F(6,66) = 1.31, p = 0.26). Compared with the pre-lesion level, the beam walking time in the 2-m group showed a statistically significant increase on day 1 and remained elevated up to day 7 after the weight-drop lesion, but the 1.5-m group did not reach statistical significance up to day 4 post-lesion. In the 1-m group, no significant differences were found in the beam balance test when compared to the pre-surgery baseline data (all p > 0.05) (Fig 5B). To understand the relationship between physical impact and graded severity at various weight-drop heights, we then correlated the neurological and balance functions measured on day 1 post-injury with the impact force, peak-to-peak acceleration, and impact height during weight-drop. The linear regression analysis showed a significant positive correlation between the mNSS score and impact force (R² = 0.76, p < 0.001), acceleration (R² = 0.83, p < 0.001), and impact height (R² = 0.51, p < 0.001) (Fig 6A-6C). We also observed a significant positive correlation between the severity of balance dysfunction and the impact force (R² = 0.57, p < 0.001), acceleration (R² = 0.51, p < 0.001), and impact height (R² = 0.55, p < 0.001) (Fig 6D-6F). Because neurological and balance impairments were observed at various impact levels, the associations of the expression of GFAP, APP, and BMX with trauma severity were examined. Western blot analysis of the FC, H, and CC lysates revealed an increase in TBI biomarkers (BMX, GFAP, and APP) at 1 (Fig 7A-7C, S1 Fig) and 7 days (Fig 8A-8C, S2 Fig) post-injury. The protein expression of BMX, GFAP, and APP increased at 1 and 7 days following impact, and the magnitude of the increase also differed among the three trauma severity groups. The increases in BMX, GFAP, and APP upregulation upon impact showed statistically significant differences among groups with varied trauma severity at 1 (Fig 7D-7F) and 7 days (Fig 8D-8F) post-injury (p < 0.05, one-way ANOVA, post hoc LSD analysis).
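The R² values reported above can be reproduced with any standard regression routine. The short sketch below only illustrates that computation (the study itself used SPSS 17.0); the arrays are synthetic placeholders with hypothetical names, not data from the experiment.

```python
# Illustrative sketch of the R^2 (coefficient of determination) calculation
# described in the Results. The arrays are synthetic placeholders, NOT data
# from the study, which analyzed its measurements in SPSS 17.0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
impact_force = rng.uniform(5.0, 25.0, size=25)             # hypothetical peak impact forces
mnss_day1 = 0.5 * impact_force + rng.normal(0.0, 1.0, 25)  # hypothetical day-1 mNSS scores

result = stats.linregress(impact_force, mnss_day1)
r_squared = result.rvalue ** 2                             # coefficient of determination
print(f"R^2 = {r_squared:.2f}, p = {result.pvalue:.3g}")
```

The same call applies to the other pairings reported in Fig 6 and Fig 9 (e.g., peak-to-peak acceleration or drop height against beam walking time or marker expression).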
To further identify the relationship between physical impact and the induced brain injury level at various weight-drop heights, we also performed correlation tests between the injury-related markers (i.e., APP and GFAP) on day 7 post-injury and the impact force, peak-to-peak acceleration, and impact height. The linear correlation analysis showed a significant positive correlation between APP protein expression and impact force (R² = 0.90, p < 0.001), acceleration (R² = 0.81, p < 0.001), and impact height (R² = 0.92, p < 0.001) (Fig 9A-9C). A significant positive correlation was also found between the GFAP expression level and the impact force (R² = 0.75, p = 0.003), acceleration (R² = 0.65, p < 0.009), and impact height (R² = 0.71, p < 0.004) (Fig 9D-9F). These results indicate that higher head impact forces or accelerations produced more severe brain injury. Discussion In the present study, we modified Marmarou's impact acceleration model and characterized the kinematic parameters during the weight-drop impact. The changes in impact force on the head, linear acceleration, and displacement of the rat head during impact injury of various severities were recorded and analyzed. Furthermore, we conducted a detailed experiment to identify the relationship between the impact kinematics and the changes in neurological function and motor behaviors by using the modified weight-drop TBI model. Our results demonstrate that controlling impact height and pressure can reliably induce injury of graded severity. The results also showed a highly positive correlation between the behavioral tests and the recorded impact parameters. This model may serve as a translational platform for bridging human and animal studies and establishing new therapeutic strategies for TBI. Although Marmarou's impact acceleration model has been widely used in various TBI studies, it has been criticized for not providing highly reproducible results because of the lack of precise control over impact force and animal biomechanics and the lack of precise recording during impact [6,18,22]. This lack of precise control and recording during impact can result in a high degree of variability, making it difficult to reproduce a given injury level across different laboratories and researchers. Thus, to eliminate these variations and standardize the protocol of this model, particularly to quantify certain aspects of TBI severity, we modified Marmarou's protocol and measured the weight-drop-induced impact force and acceleration/deceleration during impact. In addition, although experimental TBI is commonly classified into mild, moderate, and severe levels on the basis of histological evidence and functional tests, as in the clinical setting, weight-drop protocols designed to induce predefined severity levels have been investigated far less in previous studies. With the detailed functional behavior and biochemistry analyses, our results demonstrate that the severity of the induced brain injury can be predicted by precise quantification of impact force and acceleration. Therefore, it is suggested that this protocol can reproducibly and reliably induce graded severity of brain injury as well as graded neurological and motor impairments.
Furthermore, where the method and instrumentation developed in the present study are available, they can be applied to gain further insight into the relationship between the pathophysiological changes following TBI and the mechanical response of the head (e.g., force and acceleration) during impact.
[Fig 5. Quantification of weight-drop-induced TBI in rats. (A) Neurological function evaluated using the mNSS before brain injury and on days 1, 4, and 7 following weight-drop-induced brain injury at various fall heights. The mNSS in the 1-, 1.5-, and 2-m weight-drop groups was persistently higher than the pre-lesion baseline value after TBI over the complete 7-day observation period. (B) Motor balance function assessed using the beam walking test: pre- and post-TBI mean ± SEM walking ability, measured as the time (s) to traverse an elevated beam, for the various fall heights. The 2-m weight-drop group exhibited significant impairment on day 1 that persisted up to day 7 after the weight-drop lesion when compared with the pre-lesion baseline value. *p < 0.05, **p < 0.01 compared with pre-operative values (n = 10 in the 1-m, n = 9 in the 1.5-m, and n = 6 in the 2-m weight-drop group).]
When a suitable animal model of disease is used, understanding the neurological outcomes after injury is crucial. We employed a time-course analysis of behavioral and biochemical recordings, which correlated highly with the impact force and the kinematic results during impact. We selectively used three traditional fall heights, obtained from conditions identical to those in the original impact acceleration model by Marmarou [4,5]. The induction of behavioral and neuropathological changes in our study is similar to that of earlier studies that used identical or similar models [17,18,23]. We observed that the neurological scores and beam balance function varied with impact force and acceleration; a low impact height (1 m) produced no or only mild functional impairment, whereas increasing the impact height to 1.5 and 2.0 m significantly increased the functional impairment. Although the induced severity in the current TBI weight-drop model can be determined by the mNSS evaluation conducted 1 day post-lesion, no universal neurological examination system for the brief identification of severity, analogous to the Glasgow Coma Scale in patients with TBI, has been widely adopted for rats. Therefore, under the current settings of this study, mechanical injury parameters such as impact force and induced head acceleration, in combination with histological or biochemical evidence and functional tests, could provide the most reliable assessment of injury severity. Functional behavior analysis is crucial in TBI research, and we demonstrated that the modified weight-drop model can induce severity-dependent neurological and motor behavioral disturbances. Multiple behavioral and biochemical tests during the course of trauma progression are warranted to reveal the level of brain injury and motor performance, as well as to determine whether the functional deficits induced by the weight-drop model are stable or recover spontaneously over time. Whereas previous research has examined the behavioral alterations arising from various fall heights in the weight-drop method, the time-course changes in neurological and balance function across varying levels of brain injury remain incompletely characterized.
Thus, we applied the general neurological and beam walking tests at different time points following weight-drop to observe the changes in functional outcome. One day after the TBI lesion, the 1.5- and 2-m fall heights, but not the 1-m fall height, caused temporary motor balance deficits. Over 7 days, although spontaneous recovery occurred in the three fall height groups following the TBI lesion, neurological and balance function still did not reach the pre-surgery level in the 2-m weight-drop group. This implies that following severe weight-drop-induced TBI, neurological and balance function are profoundly altered. The neurological and motor deficits produced by our modified weight-drop method are consistent with previous reports demonstrating significant motor impairments following lateral fluid percussion and controlled cortical impact brain injury models [24-28]. In addition to the motor disturbance, although we did not measure cognitive function after the weight-drop lesion, previous studies have reported that the severity of cognitive deficits is related to the impact acceleration injury level [28]. Based on our biochemical analysis, neuronal damage throughout the hippocampus was present after the weight-drop lesion at the two time points examined, suggesting that impaired cognitive function in rats after weight-drop could be associated with neuropathological changes in the hippocampus. Animal models are essential for studying the biomechanical, cellular, and molecular aspects of human TBI that cannot be addressed in a clinical setting and for developing and characterizing novel therapeutic interventions. We used three markers (BMX, APP, and GFAP) to represent inflammation and axonal injury following TBI. Our results support the association of BMX, APP, and GFAP upregulation with trauma severity in rats. Following weight-drop-induced brain injury, an increase in the expression of these proteins in the injured brain areas was found. The association between the BMX, APP, and GFAP expression levels and the severity of injury was demonstrated using various fall heights. We found that the amount of neural inflammation and axonal injury varied with impact force and acceleration; a low impact height (1 m) produced little or no axonal injury, whereas a higher impact height (1.5-2 m) significantly increased these markers of inflammation and axonal injury. Notably, the changes in the levels of these markers were apparent as early as day 1 and remained elevated for at least 1 week post-lesion, indicating that inflammation and axonal injury appear in the acute or subacute stages of TBI. Furthermore, with the spontaneous recovery, the balance deficit was no longer statistically significant in the 1- and 2-m weight-drop groups.
[Fig 8. 1-, 1.5-, and 2-m weight-drop-induced brain injury in the FC (A), H (B), and CC (C) 7 days after injury. The graph shows significant upregulation of BMX, APP, and GFAP in the 1.5- and 2-m groups compared with the 1-m group after weight-drop-induced brain injury. The densitometry values for protein expression as a ratio to actin were normalized to the sham-lesion group. Data are represented as the mean ± SEM. *p < 0.05, **p < 0.01, ***p < 0.001 vs. sham-lesion group (n = 3 in each group).]
However, the biochemical biomarkers appear to remain sensitive detection methods when compared with the behavioral tests, indicating that the sustained increases in the levels of TBI biomarkers on day 7 post-lesion also validate the brain injury level. These results suggest that neuroinflammation or axonal damage following weight-drop-induced TBI may become detectable at acute, subacute, and chronic time points following injury and may develop in a progressive manner. Consistent with earlier studies, increased APP, GFAP, and BMX levels have been observed following brain injury in the similar Marmarou acceleration weight-drop model [17,18,29,30] and in other experimental TBI models, such as the controlled cortical impact [12] and fluid percussion [24,27] models of brain injury. However, compared with other TBI models, the changes in TBI markers showed different patterns at the injury site. For example, a 1.5-atm fluid percussion injury in rats does not cause visible neuronal loss in the hippocampus and neocortex but gives rise to a robust inflammatory response (as indicated by enhanced GFAP and Iba1 immunoreactivity) in the corpus callosum and the thalamus [24]. Regarding the time-course changes in TBI markers after brain injury, an earlier study found that APP immunoreactivity peaked at 1 day and declined by 3 days in the cortex, subcortical white matter, external capsule, and hippocampus after central fluid percussion injury in mice. GFAP reactivity was also observed in cortical regions, peaking at 3 days post-injury [27]. Moreover, in the controlled cortical impact injury rat model, a dramatic increase in APP immunoreactivity in the hippocampus and cortex was found at 1 day after the lesion and was sustained up to 3 days post-injury [31]. The increase in BMX expression was observed as early as 3 hours and was maintained for 3 days or more, whereas GFAP expression did not increase until approximately 4 days post-trauma [12]. Such discrepancies between studies could be due to the different types of TBI animal models, different severities of the induced brain injury, and variability in protocols. Although direct comparisons between previously published and present results during the course of trauma progression must be made with caution, the expression of such markers may play important roles in the pathogenesis of TBI, represent the levels of inflammatory or axonal injury, and thus act as indicators of trauma severity. The present study provides such evidence and shows a significant correlation between the severity of neurological dysfunction and the magnitude of the mechanical impact. Notably, the severity of neurological dysfunction, beam balance impairment, and brain damage at 1 or 7 days post-lesion strongly correlated with the impact force, acceleration, and fall height during impact, as determined by the dedicated sensors. Furthermore, the impact force and peak-to-peak acceleration showed better correlations with neurobehavioral function than did impact height. Although the fall height was held constant within each group, this result indicates that the traditional weight-drop protocol of grading injury severity by adjusting fall height alone can still yield high variability. It is therefore suggested that the graded severity of brain injury can be predicted by real-time measurement of impact force and acceleration.
Moreover, our results demonstrate that stringent control over impact force and acceleration can reproducibly and reliably induce graded severity of brain injury. Conclusions In the present study, we developed a characteristic analysis method for recording kinematic data across various severities of injury in the weight-drop-induced TBI model. By using this model, we showed that graded impact forces and accelerations can produce graded neurological and motor impairments as well as axonal injury and inflammation. This method entails using newly developed quantitative measures to reduce the variation and increase the reproducibility of the weight-drop-induced TBI model. It should be useful for studying the pathophysiology of TBI and for developing therapeutic strategies for TBI.
Extracting depth information of 3-dimensional structures from a single-view X-ray Fourier-transform hologram We demonstrate how information about the three-dimensional structure of an object can be extracted from a single Fourier-transform Xray hologram. In contrast to lens-based 3D imaging approaches that provide depth information of a specimen utilizing several images from different angles or via adjusting the focus to different depths, our method capitalizes on the use of the holographically encoded phase and amplitude information of the object’s wavefield. It enables single-shot measurements of 3D objects at coherent X-ray sources. As the ratio of longitudinal resolution over transverse resolution scales proportional to the diameter of the reference beam aperture over the X-ray wavelength, we expect the approach to be particularly useful in the extreme ultraviolet and soft-X-ray regime. ©2014 Optical Society of America OCIS codes: (090.1995) Digital holography; (070.7345) Wave propagation; (340.7440) X-ray Introduction In microscopy in general and nanoscience in particular, the extraction of depth information from three-dimensional (3D) objects of interest is often important. In optical microscopy, where numerical apertures are large, the depth-of-field can be of the same order as the lateral resolution limit and 3D objects are predominantly investigated by combining images from different focal depths [1,2]. On the other hand, when using an X-ray probe, the depth of field is typically orders of magnitude larger than the lateral resolution limit, allowing to obtain in good approximation a projection image of a sample. A 3D reconstruction of the object can then be retrieved in a tomographic approach by utilizing a set of projection images from different angular perspectives. These 2-dimensional (2D) projection images can be obtained by different imaging methods such as X-ray full-field microscopy [3], coherent diffraction imaging (CDI) [4,5], ptychography [6] or holography [7]. CDI alternatively allows reconstructing a 3D model of the specimen from a 3D reciprocal-space data set that again was composed out of many 2D diffraction patterns taken from different sample orientations [8,9]. Even though tomographic methods are powerful in that they can achieve a depth resolution as high as the lateral resolution, the requirement of several exposures can be a limitation. This can for example be the case in the study of non-triggerable or nondeterministic dynamical phenomena [10] such as fluctuations in thermal equilibrium or in general in situations when single-shot experiments are indicated, e.g. in order to follow a "diffract before destroy" approach for delicate samples via intense and ultrashort X-ray pulses [11,12]. Two techniques that have proven to be compatible with femtosecond snapshot imaging at the nanometer scale are X-ray holography [10,[13][14][15] and CDI [11,12]. Both imaging methods record the specimen's diffraction pattern in the far-field without any optical elements between sample and detector. When using holographic methods, the specimen's exit wave is reconstructed using the phase information encoded in the diffraction pattern by the interference of object and reference wave [16][17][18]. In the case of CDI, a solution of the phase problem is iteratively retrieved from the sample's diffraction pattern alone. 
In order to minimize the solution space, phase retrieval algorithms rely on certain constraint conditions which necessitate a priori information about the specimen [4,8,9]. For CDI experiments, it has already been suggested that 3D structures can be determined from a single view. The 2D diffraction pattern needs to be recorded with sufficiently fine sampling and projected on an Ewald sphere within a 3D coordinate space [19]. Therefore, the reconstruction algorithm is computationally intensive and requires many iterations involving 3D Fourier transformations. In contrast, the non-iterative approach demonstrated here is based on a conventional X-ray Fourier-transform holography (FTH) experiment and determines an unambiguous reconstruction by exploiting the information contained in a single hologram recorded on a 2D pixel detector. The holographic reconstruction of the wave field in the plane of the reference source allows to propagate the angular spectrum of the reconstructed object wave along the beam axis [18,20] and thereby refocus specimen features outside the depth-of-field. The finite depth-of-field resulting from the scattering geometry can be used to measure the distance between features along the beam axis as has been demonstrated for holography experiments using soft X-rays [21] and extreme ultra-violet (EUV) radiation [22,23]. In this work, the depth information is used to reconstruct a complete 3D model of a specimen. The method can be seen as an analogue to digital holographic microscopy (DHM) [2,24] as the position of different features along the optical axis (i.e. the longitudinal position) is determined by bringing the feature into focus. Similar to DHM and contrary to non-holographic methods, this information is retrieved post-experimentally from a single Fourier-transform hologram, which can hence be recorded in a single exposure if required. As specimen we use an artificial, extended 3D test structure whose longitudinal dimension exceeds the optical setup's depth-of-field. From the hologram reconstructions at different longitudinal coordinates, the displacements of the specimen features from the sample substrate are measured and transferred into a 3D model reconstruction. A suitable specimen feature is utilized to determine the transversal and longitudinal resolution. The results are compared to the theoretical prediction. Theory In soft-X-ray FTH, the hologram is recorded in the far-field as an interference pattern of the beam scattered by the object with an unperturbed reference beam [16][17][18]. The field of view is usually restricted by an aperture, fabricated into an X-ray-opaque metal-film mask that is produced in close proximity to the sample substrate. The reference beam allowing for phaseencoding typically originates from a small pinhole adjacent to the object fabricated into the same mask [17], while the use of more complex reference beams is also possible [13,25]. This integrated mask design monolithically couples the source of the reference wave to the object to be imaged and generates the remarkable stability of FTH against drift and vibration during exposure. Since no optical elements are blocking the space around the integrated mask, FTH provides flexibility to realize a variety of sample environments [26,27] and in particular the possibility to easily reach the sample with an optical pump [28]. 
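To make the FTH recording geometry just described more concrete, the following toy simulation sketches how the far-field intensity of an object aperture plus a small off-axis reference pinhole forms a Fourier-transform hologram. All dimensions, pixel counts, and aperture shapes are arbitrary illustrative choices, not the experimental parameters of this work.

```python
# Toy illustration of the FTH principle: the far-field intensity of the exit wave
# from an object aperture and an off-axis reference pinhole constitutes the hologram.
import numpy as np

N = 1024
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

exit_wave = np.zeros((N, N), dtype=complex)
exit_wave[x**2 + y**2 < 60**2] = 1.0            # object aperture (unit transmission)
exit_wave[(x - 200)**2 + y**2 < 2**2] = 1.0     # small off-axis reference pinhole

# Far-field (Fraunhofer) diffraction pattern recorded on the detector.
farfield = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(exit_wave)))
hologram = np.abs(farfield)**2                  # interference of object and reference waves
```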
In addition, the off-axis geometry of the reference wave source from the object effectively spatially separates the twin-images occurring in the hologram reconstruction. Due to these advantages and its simple experimental design, FTH with X-rays is routinely applied in numerous experiments such as relating to imaging nanomagnetic structures [26][27][28][29][30][31][32] or biological specimen [7,13,21,33]. In the single pinhole reference case further considered here, the image reconstruction is retrieved by a 2D spatial Fourier transformation of the recorded hologram and results in a 2D information of the amplitude and phase of the object's exit wave, i.e. one obtains the 2D, complex wavefield leaving the object. The in-focus contribution of this wavefield is obtained from the plane parallel to the detector which includes the reference source. This 2D reconstruction shows optimally resolved features for the portion of the specimen that has a sufficiently small longitudinal separation from this plane of the reference source, i.e. features within the depth-of-field. However, the holographic reconstruction of the wave field in the plane of the reference source allows to propagate the angular spectrum of the reconstruction [18,20] along the beam axis and thereby refocus specimen features outside the depth-of-field. It has been shown that the plane of least confusion can be separated from the plane of the reference by numerically propagating the reconstructed specimen wave front according to the physical separation between both planes [21,25]. Similarly, a thoroughly focused reconstruction of an extended 3D object can be obtained by numerically propagating the reconstructed wave fronts of individual object features outside the depth-of-field into focus. Additionally, the finite depth-of-field can be used to measure the position of a particular feature along the optical axis by examining the feature's reconstruction for different propagation lengths [22,23]. We thus consider the depth-of-field as the longitudinal resolution limit that allows a clear distinction between features that are closely displaced along the optical axis. The application of the free-space propagator requires a homogeneous wavelength throughout the object, i.e. the refractive index is assumed to be constant. For large spatial deviations of the specimen refractive index within the propagation distance, the actual wave front is distorted and might deviate from the numerically retrieved reconstruction. Generally, in the X-ray regime, the refractive index is close to unity [34] and spatial distortions caused by an inhomogeneous refractive index within one attenuation length are expected to be much smaller than the longitudinal and lateral resolution limits. In the following we assume that the geometry of the FTH experiment allows to record sufficiently large scattering angles such that the limit for the lateral spatial resolution is set by the lateral size of the reference aperture and not by the maximum momentum transfer. The phase information in the hologram is encoded by the object-reference interference and is accessible only up to scattering angles with sufficiently high fringe contrast, i.e. up to scattering angles with adequate reference and object beam intensity. For a circular reference pinhole of diameter d, illuminated by a plane wave, the intensity distribution of the reference beam in the mask plane is a circular disk with constant intensity. 
In the far-field, the reference beam intensity on the detector is given by the Airy pattern. Typically, image information is gathered predominantly for scattering angles within the radius of the central Airy disk, which still contains sufficient intensity of the reference beam. Considering the Airy disk's first minimum as the upper limit for the definition of the system's effective numerical aperture (NA_eff = 1.22 λ/d), both the lateral resolution Δr and the depth-of-field are defined by the diameter d of the reference pinhole. For illumination with radiation of wavelength λ, the modulus of the depth-of-field (Δz = ± λ / (2 NA²)) [34] can be assessed by substituting NA with NA_eff:

Δz = ± λ / (2 NA_eff²) = ± d² / (2 × 1.22² × λ) ≈ ± 0.34 d² / λ.    (1)

Remarkably, in the case of reference-limited FTH, the lateral resolution depends only on the size of the reference aperture (Δr ≅ d), while the depth resolution additionally depends on the wavelength of the illumination and improves for larger wavelengths. This at first sight counterintuitive effect is the result of the increasing divergence of the reference beam with increasing wavelength. Please note that relation (1) represents the lower boundary of the depth-of-field given by the first minimum (with zero intensity) of the diverging reference beam. For real detection systems affected by noise and limited photon statistics, the maximum detected scattering angle with sufficient signal-to-noise ratio and the corresponding NA_eff will be smaller, resulting in a larger depth-of-field than the given boundary. Experiment The 3D test object was produced together with the holographic mask on a 350 nm thick silicon nitride membrane coated with a 1.3 µm gold film. In a first step, an object aperture of 4.4 µm in diameter was milled into the metal film utilizing a focused ion beam (FIB). Subsequently, the reference pinhole was produced at a distance of 10 µm from the centre of the object aperture with an exit diameter of 80 nm. The actual test structure was fabricated by FIB-assisted deposition of platinum (Fig. 1). The sample includes a diagonal ramp that extends over the object aperture and a group of five differently shaped bodies deposited within the object aperture on the silicon nitride substrate. The ramp is 1.1 µm wide and has an inclination angle of 45° relative to the substrate. Apertures of different shape were milled into the ramp corresponding to four different elevations above the substrate. The mask-based FTH experiment was performed at the undulator beamline U41-PGM at the BESSY II synchrotron source, in a configuration as described in Ref. [17]. The sample was placed 220 mm upstream of a back-illuminated charge-coupled device (CCD) with 2048 × 2048 pixels (pixel size 13.5 μm). Soft X-rays with a photon energy of 400 eV, corresponding to a wavelength of λ = 3.1 nm, were used to illuminate the sample. The hologram was recorded by accumulating 200 frames with 250 ms exposure time each. In this configuration we expect a lower boundary for the depth-of-field of |Δz| > 693 nm in the reconstruction. The direct beam was blocked by a circular beamstop to reduce the dynamic range of the scattering signal to the technical capabilities of the CCD detector. The missing intensity at low scattering angles corresponds to a high-pass filtering of the hologram. In addition to the first measurement, the centre of the diffraction pattern was recorded in a second exposure without the beamstop, but using a 150 nm thick gold filter to attenuate the beam.
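The depth-of-field bounds quoted in the text follow directly from relation (1); the short check below reproduces the |Δz| > 693 nm estimate for the 80 nm reference pinhole at λ = 3.1 nm, as well as the ± 159 nm value discussed later for λ = 13.5 nm. The helper function is ours, written only for this check.

```python
# Quick numerical check of relation (1) for the parameters quoted in the text.
def depth_of_field(d_nm, wavelength_nm):
    na_eff = 1.22 * wavelength_nm / d_nm        # effective numerical aperture
    return wavelength_nm / (2.0 * na_eff**2)    # lower bound of |dz| in nm

print(depth_of_field(80, 3.1))    # ~693 nm for the 80 nm reference at 400 eV
print(depth_of_field(80, 13.5))   # ~159 nm at the 13.5 nm FLASH wavelength
```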
In order to decrease the CCD readout time, the region of interest was limited to a 200 × 200 pixel matrix which was centred in the hologram. The centre was recorded by accumulating 1000 frames with 0.9 s exposure time each. In the final single-view hologram, both exposures where patched together after matching the intensities in the overlap region. Fig. 2. Fourier-transform holography experiment. (a) Scheme of FTH setup. With a coherent X-ray beam incident from the left side, the holographic mask defines a reference wave and the object illumination. The blue curve on the right side corresponds to the far field as recorded on the detector and shows the normalized intensity (I norm ) profile of the phase encoding reference beam which originates from the small pinhole of diameter d in the mask plane. (b) Coherent diffraction pattern from the 3D test structure in Fig. 1 constituting a Fourier transform hologram, recorded with soft X-rays with λ = 3.1 nm. The intensity is plotted in logarithmic scale. The white circle has a radius of q zero = 96 µm −1 momentum transfer and corresponds to the first minimum of Airy pattern of the reference pinhole. The evaluation of depth-of-field and smallest resolved spot size in the reconstruction relies on a small real space feature that corresponds to a momentum transfer of q max = 73 µm −1 indicated by the red circle. The patched hologram shown in Fig. 2(b) was reconstructed by a 2D Fourier transformation after zero-padding the hologram to 4096 × 4096 pixels. The focus was shifted along the beam axis by applying the free-space propagator to the reconstruction as described in Refs [18,20]. Results Figure 3(a) shows the reconstruction of the patched hologram without application of the free space propagator. By default, the focus is in the plane of the reference-wave source, i.e. in the mask plane. This initial reconstruction, hence, corresponds to a zero propagation length. As seen in Fig. 3(a), features on the bottom of the object hole (marked by arrows) are reproduced without defocus blurring while the ramp is out of focus. Depending on the longitudinal separation from mask plane, the ramp shows ever more pronounced diffraction fringes towards the top. At the photon energy utilized for illumination, the object shows strong absorption without significant phase shift. For a numeric identification of the refocus distance of a particular feature from the mask plane, suitable focus criteria thus rely on an analysis of the reconstructed amplitudes [35][36][37][38]. Numerically indentifying the optimal focus is especially promising for a single extended focal plane with many features [35,36], or for small objects that are well separated [37]. In our reconstruction a numerical focussing was hampered by two aspects: (i) For extended features that are permeating different focal planes (the platinum ramp), the common numeric approaches would rely only on a limited amount of pixels that can be brought into focus at a time. (ii) The in-focus part of the specimen is often superimposed by fringes of nearby features of a different focal plane. Additionally, Fourier transform holograms are often high-pass filtered via use of a central beam block (in contrast to this work). In these cases, edge ringing is amplified in the reconstruction of high-pass filtered holograms and a numeric evaluation is even more challenging. On the other hand, the optimal propagation length can conveniently be determined by a visual inspection of the reconstruction. 
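A compact sketch of the reconstruction and refocusing procedure described above is given below: the patched hologram is zero-padded, Fourier transformed, and the reconstructed object wave is then numerically propagated along the beam axis with an angular-spectrum free-space propagator. The wavelength, detector distance, pixel size, and padded array size follow the stated geometry, while the hologram file name, the crop window around the object image term, and the sign convention for "upstream" propagation are assumptions of this sketch.

```python
# Sketch of FTH reconstruction (2D FFT of the zero-padded hologram) followed by
# angular-spectrum propagation of the reconstructed object wave along the beam axis.
import numpy as np

wavelength = 3.1e-9            # m (400 eV)
z_detector = 220e-3            # m, sample-detector distance
det_pixel = 13.5e-6            # m, CCD pixel size
N = 4096                       # zero-padded array size

holo = np.zeros((N, N))
holo[1024:3072, 1024:3072] = np.load("patched_hologram.npy")  # hypothetical 2048x2048 data

# The reconstruction contains the object-reference cross-correlation off-centre,
# spatially separated from its twin image by the off-axis reference geometry.
recon = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(holo)))

# Real-space pixel size in the reconstruction plane.
dx = wavelength * z_detector / (N * det_pixel)

def propagate(field, dz):
    """Angular-spectrum free-space propagation of a complex field by dz (metres)."""
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(field.shape[1], d=dx)
    fy = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fy)
    kz = np.sqrt(np.maximum(k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

object_image = recon[2300:2812, 2300:2812]     # assumed crop around one image term
refocused = propagate(object_image, -6e-6)     # e.g. 6 um upstream, as in Fig 3(b)
```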
By observing the reconstruction for different propagation lengths, fringes of defocused edges are seen to converge for the propagation distance approaching the separation between the particular feature and the mask plane. It is thus possible to reliably find the optimal propagation length of sharp features without any a priori knowledge. For the evaluation of the recorded hologram data, the visual approach delivered results that were superior over numeric methods relying on an analysis of the variance [38], modulus [35] or entropy [36] of the reconstructed amplitude. In Fig. 3(b) we present the reconstructed wave field after numerically propagating the initial reconstruction 6 µm upstream. This distance corresponds to a separation from the mask plane that shifts the lower part of the ramp into focus. As a result, the height markers on the low side of the ramp are in focus and clearly resolved while features at the bottom of the object hole and at the top of the ramp are defocused. The associated propagation length was found by minimizing fringes around the features of interest. The tip of the ramp including the markers located at this position can be brought into focus by propagating the initial reconstruction 9 µm upstream (Fig. 3(c)). This value is in agreement with the total longitudinal dimensions of our sample as determined via scanning electron microscopy. The lateral resolution and the depth-of-field of the reconstruction were evaluated in a certain region of interest (ROI) on the ramp indicated by the white box in Fig. 4(b). Features in the 1µm × 1µm ROI as magnified in Fig. 4(c) are focused at a propagation distance of 7 µm. The lateral resolution was determined by evaluating the pixels indicated by a red line through the fine central slit. In the scanning electron micrograph, the width of the slit was measured to 43 ± 7 nm. Figure 4(a) shows the normalized values (real part) of the corresponding pixels in the reconstruction. The smallest reconstructed feature size in lateral direction, corresponding to the full width at half maximum (FWHM) as determined from Fig. 4(a) is 64 nm. In the reconstruction, the measured feature size corresponds to the finite width of the fine slit which is convoluted with the FTH point spread function. Considering Gaussian transmission profiles of slit and reference, we estimate a resolution limit of 48 ± 8 nm (FWHM). In Fig. 4(d) we show the reconstructed wave field (real part) along the slice marked by the red line in Fig. 4(c) for different numerical propagation distances of the reconstruction. The data plotted in Fig. 4(a) is thus identical to a transverse line profile at zero propagation distance in Fig. 4(d). A longitudinal line profile (dotted blue line) is extracted on the right side of Fig. 4(d). The depth-of-field, identified by a 20% drop of the reconstructed real part [34], is determined to be ± 1.2 µm. From the quantitave assessment of the longitudinal position of the reconstructed features, we are able to compile a 3D model of our object. As in-focus features are distinguished visually by the appearance or disappearance of fringes at their edges, we aim to identify these edges at their respective longitudinal height in a second step. The edges were found by computing the variance of the real parts in a 9 × 9 pixel matrix (corresponding to 225µm × 225µm) that was shifted pixelwise across the whole reconstruction resulting in 2D variance map. These variance maps were calculated for different propagation lengths. 
The presence of edges within the depth-of-field results in a strong increase in variance [38] and corresponding pixels could be isolated by simply thresholding the variance map. The resulting wireframestyle 3D model maps the specimen edges at their respective longitudinal positions and is presented in Fig. 5(a). A visualisation of the 3D surface topography information in a 2D plot is illustrated in Fig. 5(b). Here, we have combined reconstructions for different propagation lengths into an entirely focused, color-coded reconstruction. The color overlay in this reconstruction corresponds to the longitudinal height of the particular feature while the local image brightness still corresponds to the reconstructed specimen wave field (real part). The longitudinal feature positions are taken from the model presented in Fig. 5(a) and were interpolated between feature boundaries. The longitudinal position of edges from strongly absorbing features as plotted in Fig. 5 corresponds to distance from the FTH mask where the respective edge first intercepts the beam. Once a strong absorber brings the beam to extinction, no information on objects shadowed in this way can be obtained. This is a principal limitation and fully analogous the situation encountered when recording focus series in microscopy, e.g. using visible light. For this reason, specimen that are best dedicated for this 3D imaging method show features that are well separated or semi-transparent. As expected from the geometry of the experimental setup, the lateral resolution is far superior to the longitudinal resolution limit. The measured depth-of-field in the experiment deviates considerably from the predicted lower boundary of relation (1) for two reasons. Firstly, the feature used for the evaluation of the depth-of-field is not point-like but has a finite width of 43 ± 7 nm. In the reconstruction, the intensity profile through the slit is wider than the profile of the point spread function of the FTH imaging process. As a consequence, the depth-of-field of the FTH setup is overestimated. Secondly, the detection of the hologram is affected by noise of the CCD and the limited photon statistic, in particular, at high scattering angles (cf. Fig. 2(b)). The phase-encoding interference signal on the detector is not recorded with sufficient signal-to-noise ratio up to the given zero-intensity boundary of the reference wave's Airy disk at q zero = 96 µm −1 (white circle in Fig. 2(b)). Instead, the effective maximum scattering angle up to which the hologram is modulated will be smaller, resulting in a larger depth-of-field. The effective maximum momentum transfer of q max = 73 µm −1 considering both the finite feature width and the detection limitations is shown as red circle in Fig. 2(b). It can be concluded that the true effective depth-of-field of the FTH setup considering hologram signal-to-noise (and not considering the limitations of the finite test feature) lies between 0.7 µm and 1.2 µm. Relation (1) for Δz above also suggests that the longitudinal resolution can be improved by using lower photon energies while still maintaining sub-100 nm lateral resolution. For instance, the lower boundary for the longitudinal resolution reduces to Δz = ± 159 nm when the FTH mask in this experiment would have been illuminated with a wavelength of λ = 13.5 nm, i.e. a prominent wavelength for destructive single-shot experiments at the free-electron laser in Hamburg (FLASH) [11,39]. 
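The edge localization used to assemble the wireframe model above can be sketched as a local-variance filter applied to the reconstructed real part at each propagation distance, followed by thresholding. The 9 × 9 pixel window follows the text, while the threshold value, the placeholder reconstruction stack, and the simple rule assigning each edge pixel a height are simplifications of ours rather than the authors' exact procedure.

```python
# Sketch: local variance map (9x9 window) of the reconstructed real part, computed
# for a stack of propagation distances and thresholded to mark in-focus edges.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=9):
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean**2

# Placeholder stack of (dz, real-part reconstruction) pairs; in practice this would
# be built with the angular-spectrum propagator sketched earlier.
refocused_stack = [(dz, np.random.rand(512, 512)) for dz in np.arange(-10e-6, 0, 1e-6)]

height_map = None
for dz, frame in refocused_stack:
    var = local_variance(frame)
    edges = var > 5 * np.median(var)                 # arbitrary threshold for the sketch
    if height_map is None:
        height_map = np.full(frame.shape, np.nan)
    height_map[edges & np.isnan(height_map)] = dz    # assign each edge its focus distance
```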
Conclusion We demonstrate the extraction of depth information from a single soft-X-ray Fourier-transform hologram. Our approach allows a precise and intuitive measurement of longitudinal displacements of object features, analogous to recording a focus series "through an object" in light microscopy. Using this information, we are able to reconstruct a 3D model of a test object. In the reconstruction, we estimate a lateral resolution of 50 nm and a longitudinal resolution of ± 1.2 µm. The depth resolution can be significantly improved by either increasing the wavelength of the X-ray illumination or further reducing the size of the reference wave source, provided that the coherent photon flux and the resulting signal-to-noise ratio in the hologram are not limiting factors. As the method is compatible with FTH using more complex reference structures for increased efficiency [13,25], the reduction in reference wave intensity that accompanies a smaller reference aperture diameter can be compensated to optimize the hologram fringe visibility. Given the limitations of the approach demonstrated here as compared to obtaining 3D information via tomography [3,7], we anticipate the application of this 3D imaging method especially for weakly absorbing samples, e.g. for research in the life sciences, in situations where the acquisition of many projection images is not possible. In particular, this is the case when the sample is structurally changing in time due to dynamic processes such as inherent or triggered dynamics or radiation damage. In this case, our approach allows 3D information on the sample to be obtained from a single-view snapshot hologram in a diffract-before-destroy [11,12] approach.
A conserved immune trajectory of recovery in hospitalized COVID-19 patients Many studies have provided insights into the immune response to COVID-19; however, little is known about the immunological changes and immune signaling occurring during COVID-19 resolution. Individual heterogeneity and variable disease resolution timelines obscure unifying immune characteristics. Here, we collected and profiled >200 longitudinal peripheral blood samples from patients hospitalized with COVID-19, with other respiratory infections, and healthy individuals, using mass cytometry to measure immune cells and signaling states at single cell resolution. COVID-19 patients showed a unique immune composition and an early, coordinated and elevated immune cell signaling profile, which correlated with early hospital discharge. Intra-patient time course analysis tied to clinically relevant events of recovery revealed a conserved set of immunological processes that accompany, and are unique to, disease resolution and discharge. This immunological process, together with additional changes in CD4 regulatory T cells and basophils, accompanies recovery from respiratory failure and is associated with better clinical outcomes at the time of admission. Our work elucidates the biological timeline of immune recovery from COVID-19 and provides insights into the fundamental processes of COVID-19 resolution in hospitalized patients. Here, we investigated intra-patient immunological changes across clinically relevant time points to identify changes in immune responses that accompany effective COVID-19 resolution. We obtained longitudinal peripheral blood samples (n = 230) from hospitalized COVID-19 patients, SARS-CoV-2 negative ventilated patients, and healthy individuals. To investigate changes in immune cell signaling states over time, we utilized mass cytometry with a unique panel of antibodies specific for immune cell phenotyping and for measuring phosphorylated cell signaling proteins. We identified distinct immune cell composition and signaling states in COVID-19 patients compared to COVID-19 negative patients and healthy individuals. Additionally, we discovered a conserved and coordinated immune response that accompanies COVID-19 resolution and hospital discharge. Furthermore, these and other features were relevant to resolution of the most severe mechanically ventilated patients, and these immune cell states correlated with better clinical outcomes at time of admission. Our findings indicate that, although patients have heterogeneous immunological baselines and highly variable disease courses, there exists a core immunological trajectory that defines recovery from severe SARS-CoV-2 infection. Our results provide a working model of a successful immune response trajectory among patients with COVID-19 requiring hospitalization, deviations from which are associated with extended hospitalization and mortality. Longitudinal peripheral blood sampling from hospitalized COVID-19 positive and negative patients To investigate the composition of circulating immune cells and the cell signaling states that characterize SARS-COV-2 infections and distinguish it from other respiratory infections, we collected longitudinal peripheral blood (PB) samples from COVID-19 patients and COVID-19 negative patients (PCR negative for SARS-COV-2) admitted to UCSF Medical Center and Zuckerberg San Francisco General Hospital. PB samples and corresponding patient demographics and clinical parameters, e.g. 
World Health Organization (WHO) severity scores (World Health Organization 2021a), ventilation duration, and hospital length of stay, were collected throughout inpatient care (Table S1 and S2). PB samples from healthy individuals (n = 11) were obtained as controls (Table S3). All samples were processed, stained, and analyzed by mass cytometry to quantify the expression of 30 protein markers and 14 phosphorylated signaling molecules (Table S4). Samples that met our quality control standards (methods) were normalized across batches to obtain our final cohort of 230 samples; 205 samples from 81 COVID-19 patients, 14 samples from 7 COVID-19 negative patients, and single samples from each of 11 healthy individuals ( Figure 1A and S1A). COVID-19 patients were classified into COVID-19 severity groups based on their WHO score at day of sampling (3: mild, 4: moderate, 5-6-7: severe) (World Health Organization 2021a). Based on the phenotypic markers in our antibody panel, we manually gated 38 canonical immune cell populations ( Figure S1B) and evaluated immune cell population frequencies, protein expression patterns, and immune cell signaling pathways specific to COVID-19 course escalation and resolution. significant frequency changes across almost all manually gated immune cell populations ( Figure 1C) (Mathew et al. 2020). To determine modules of immune changes, we evaluated if distinct immune cell populations correlate with each other as well as with patient demographics or clinical parameters. We found a coordinated adaptive immune response in which several T cell subsets and B cell frequencies were positively correlated with one another ( Figure 1D). In contrast, the innate arm demonstrated a dichotomous relationship, with neutrophil and monocyte frequencies being anti-correlated. Additionally, monocyte frequencies at day 0 were positively correlated with T cell subsets and negatively correlated with ventilation duration ( Figure 1D), suggesting there may be a coordinated immune response associated with better clinical outcome. Monocyte and neutrophil composition reveal unique compartmental shifts in innate immune arm of COVID-19 infection Large shifts in innate immune compartments were evident between COVID-19 patients, patients with other respiratory infections, and healthy controls ( Figure 1B); therefore, we further investigated the composition of neutrophils and monocytes. While neutrophil frequency was not significantly different between COVID-19 patients and the healthy individuals ( Figure 1C and S1C), we found that a variety of proteins were altered in their expression on neutrophils across groups. Neutrophils from COVID-19 patients exhibited significantly increased expression of CD11c, CD14, CD16, and PD-L1, suggesting a highly activated and inflammatory neutrophil phenotype in COVID-19 patients ( Figure 1E). Additionally, while the frequency of all monocytes was comparable between groups ( Figure 1C), composition of monocyte subsets (defined as classical, intermediate, and non-classical) was significantly different between patients with COVID-19 and other respiratory infections compared to healthy individuals. Patients exhibited a significant increase in the frequency of intermediate monocytes along with a relative decrease in classical monocytes ( Figure 1F). Cross-sectional analysis of COVID-19 severity groups reveals few immunological features that distinguish severity states, requiring a new approach to evaluating immune trajectories in our patient cohort. 
Having established the major differences between COVID-19 patients, COVID-19 negative patients, and healthy individuals at D0, we turned to evaluate the immunological differences between COVID-19 severity groups across time (Figure S1D). Surprisingly, we found no significant differences between severity groups at D0 and only a few population differences at D4 and D7 (Figure S1E and S1F). Within each severity group, comparisons across time showed that plasmablasts contract from D0 to D7 in the majority of severe COVID-19 patients (Figure S1G), while activated CD4 T cells are upregulated from D0 to D7 in mild COVID-19 patients (Figure S1H). The paucity of differences between severity groups suggested that significant variability may exist in the timing of disease escalation and resolution across individuals, and therefore in the immunological processes that mediate these changes over time. Early, coordinated, and activated immune cell signaling states in COVID-19 patients To gain insights into key immune cell signaling modules associated with COVID-19, we measured the phosphorylation state of 14 signaling molecules across all immune cell subsets (Figure 2A). First, we evaluated the median level of phosphorylated signaling proteins across all CD45+ hematopoietic PB cells in COVID-19 positive, COVID-19 negative, and healthy individuals at D0. Differential expression analysis revealed five signaling molecules (pSTAT1, pPLCg2, pZAP70/pSyk, pCREB, and pSTAT3) that were upregulated in COVID-19 patients compared to healthy individuals (Figure 2B). To determine if a specific cell type was driving the higher signaling state in COVID-19 patients, we evaluated the median phosphorylation state of the respective signaling molecules within all manually gated immune cell subsets. We found significantly higher median signaling across the majority of cell subsets, showing that immune cell signaling states are coordinated across most cell types simultaneously and not driven by signaling within a specific cell type (Figure S2A). To investigate coordinated signaling modules in CD45+ cells, we correlated the expression of signaling molecules at D0. For COVID-19 patients, we observed a coordinated positive signaling response (Figure 2C), while this coordination was absent in patients with other respiratory infections or sepsis (Figure 2D). Additionally, we correlated signaling molecule expression with hospital length of stay and ventilation duration (Figure 2C) and found significant negative correlations between these parameters and the levels of pSTAT3 (Figure 2E) and pSTAT6 (Figure 2F), suggesting that higher pSTAT3 and pSTAT6 signaling at time of admission corresponds to better clinical outcomes. Finally, we evaluated signaling differences within and across severity groups at D0, D4, and D7, but observed no significant changes (Figure S2B and S2C).
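The correlation analyses described above (signaling levels at D0 versus hospital length of stay and ventilation duration) amount to computing a correlation matrix between per-patient features and clinical parameters; a minimal sketch is shown below. The file and column names are illustrative, and Spearman correlation is used here as one reasonable choice rather than a statement of the study's exact method.

```python
# Sketch: correlate per-patient median phospho-signals at D0 with clinical parameters.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("d0_features_and_clinical.csv")   # hypothetical: one row per patient

signaling_cols = ["pSTAT1", "pSTAT3", "pSTAT6", "pCREB", "pPLCg2"]
clinical_cols = ["hospital_los_days", "ventilation_days"]

results = []
for s in signaling_cols:
    for c in clinical_cols:
        rho, p = spearmanr(df[s], df[c], nan_policy="omit")
        results.append({"signal": s, "clinical": c, "rho": rho, "p": p})
print(pd.DataFrame(results).sort_values("p"))
```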
Conserved immunological processes and changes in cell signaling states accompany disease resolution and discharge Although cross-sectional analysis can provide insights into the immunological state of COVID-19 patients and severity groups, the natural heterogeneity of patient immune responses and significant differences in their disease time courses may obscure immunological processes that mediate recovery. Therefore, we aimed to identify conserved changes within patients, over time, that are tied to clinically relevant outcomes. Given that the majority of our patients successfully recovered from the infection, albeit after differing lengths of hospitalization, we investigated immunological changes that occurred within patients from time of admission (tp1) to time of discharge (tp2) from the hospital (n = 32) (Figure 3A and S3A). For this analysis, we included patients who were discharged within 30 days of admission across all disease severity states at time of enrollment, allowing us to identify conserved features among all COVID-19 patients who successfully recover. A variety of immune cell subsets significantly changed in frequency between tp1 and tp2 (Figure 3B). Monocytes as well as activated CD4 and CD8 T cells significantly increased at the time of discharge (tp2) as patients resolved the infection (Figure 3C). Conversely, neutrophils and conventional type 1 dendritic cells (cDC1s) significantly decreased in frequency by the time of discharge (Figure 3C). For most COVID-19 patients, the overall composition of immune cells became more similar to that of healthy individuals at the time of discharge compared to the time of enrollment (Figure 3D). However, some immune cell populations exhibited deviations away from healthy levels at the time of discharge, most notably activated CD4 and CD8 T cells (CD38+ HLA-DR+) as well as monocytes (Figure 3E). This indicates that the immune state at the time of discharge is characterized by the restoration of certain elements of the immune response that were perturbed early in infection, alongside a continued immunological process that proceeds past the time patients stabilize for discharge. Patients who successfully resolve COVID-19 have robust pan-hematopoietic signaling and cytotoxic activated T cells at day 0 To obtain more granular insights into the immunological perturbations that accompany COVID-19 recovery, we evaluated phenotypic changes and signaling dynamics within immune cell populations that changed during disease resolution. We focused on cell populations whose frequencies move away from the levels observed in healthy controls, indicating that they continue to have a dynamic response during infection resolution. Activated CD4 and CD8 T cells exhibited a reduction in the expression of Granzyme B and CD45RA as patients transitioned from early infection to discharge (Figure 3F and S3B), consistent with a transition from more activated effector cells to more of a memory phenotype. Interestingly, we also observed a significant change in the phenotype of circulating monocytes, which expressed high levels of PD-L1 at the time of admission but higher levels of CD4, CD11c, and HLA-DR at the time of discharge (Figure 3G, 3H, and S3B).
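The within-patient admission-versus-discharge comparisons underlying the frequency changes above can be sketched as a paired test per population with multiple-testing correction. The paired Wilcoxon signed-rank test with Benjamini-Hochberg correction used below is one natural choice and is not necessarily the exact procedure applied in the study; the table layout is an assumption.

```python
# Sketch: paired tp1 (admission) vs tp2 (discharge) comparison of population frequencies.
import pandas as pd
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

freq = pd.read_csv("population_frequencies.csv")   # hypothetical: patient, timepoint, population, frequency
wide = freq.pivot_table(index=["patient", "population"],
                        columns="timepoint", values="frequency").reset_index()
wide = wide.dropna(subset=["tp1", "tp2"])          # keep patients with both time points

records = []
for pop, sub in wide.groupby("population"):
    stat, p = wilcoxon(sub["tp1"], sub["tp2"])
    records.append({"population": pop, "p": p,
                    "median_change": (sub["tp2"] - sub["tp1"]).median()})
res = pd.DataFrame(records)
res["q"] = multipletests(res["p"], method="fdr_bh")[1]   # Benjamini-Hochberg FDR
print(res.sort_values("q"))
```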
Similarly, we observed a reduction in PD-L1 expression on neutrophils at time of discharge ( Figure S3B). We then analyzed the median values of phosphorylated signaling molecules within the relevant immune cell subtypes to evaluate changes in cell signaling during this resolution phase. A variety of cell signaling proteins were significantly downregulated within the key immune cell populations at time of discharge ( Figure 3I). Several signaling molecules changed in a coordinated fashion across different immune cell types (e.g. pTBK1, pERK, and pSTAT3), with the broadest signaling changes observed in activated CD8 T cells and monocyte subsets ( Figure 3I and 3J). These observations are consistent with previous studies describing the relationship between IL-6 expression and pSTAT3 signaling and subsequent upregulation of PD-L1 in monocytes ( Figure 3F and 3K) (W. Zhang et al. 2020). Although signaling trajectories trended in the same direction among most patients ( Figure S3C), we did not observe a clear trend towards healthy individuals ( Figure S3D), likely explained by the expression variability and difficulty of measuring signaling molecules in rare populations of healthy cohorts e.g. activated CD8 T cells ( Figure 3E). Taken together, our results suggest that a coordinated set of changes in immune cell abundances and signaling states occur in patients who successfully resolve COVID-19. Immune features associated with COVID-19 resolution are absent in patients who are hospitalized for more than 30 days or die from COVID-19 To determine if the immune features identified in the resolution phase are specific to patient recovery, we analyzed patients who had delayed disease resolution, i.e. who remained hospitalized for more than 30 days ("late discharge"; n = 6 patients) or who died from COVID-19 ("ultimately deceased"; n = 5 patients) ( Figure 4A and S4A). First, we evaluated the immune cell population changes occurring within these patients over a similar period from the time of admission, but found no significant changing populations for either group ( Figure S4B). We asked if the lack of immune remodeling between these timepoints was due to a reflection of an insufficient initial response or, alternatively, a sustained immune response that failed to resolve. In fact, both baseline immune cell frequencies at the time of admission and the magnitude of their changes were different, though in different ways for different elements of the immune response ( Figure 4B, S4C, S4D, and S4E). In deceased patients, neutrophil frequencies were excessively elevated at both tp1 and tp2, while monocytes started at a lower frequency and failed to reach levels comparable to resolving patients ( Figure 4B and S4F). Activated CD8 T cells were present at similar abundances across groups at the time of admission but became much more abundant in late discharge and ultimately deceased patients ( Figure 4B and 4C). In contrast, activated CD4 T cells were already more elevated in late discharge and ultimately deceased patients at the time of admission and became even more elevated over time ( Figure 4B). An increase in the abundance of cDC1s was notably absent in ultimately deceased patients at the time of admission, while they were substantially more elevated in late discharge patients ( Figure 4B). 
Elevated cell signaling at time of admission is associated with COVID-19 resolution

Next, we evaluated signaling dynamics in late discharge and ultimately deceased patients to determine if the observed changes in cell frequencies were accompanied by dysfunctional signaling. In contrast to patients resolving COVID-19 in <30 days, who exhibited consistent changes from high to low signaling states over time, we observed no significant changes for late discharge and ultimately deceased patients (Figure 4D, S4G, S4H, and S4I). Instead, these patients exhibited discordant signaling directionality in activated CD8 T cells (Figure 4E), a complete lack of pS6 signaling in cDC1 cells (Figure 4E), and less signaling at tp1 across monocyte subsets (Figure 4F). Interestingly, when the late discharge patients are within 30 days of discharge, the trajectory of several immune resolution features, e.g. monocytes, neutrophils, and signaling molecules, resembles the recovery trajectories of patients hospitalized <30 days, suggesting that the resolution phase engages in these patients as well before they are discharged (Figure 4G and S4K). Taken together, these results indicate that late discharge and ultimately deceased patients exhibit reduced immune cell signaling at the time of hospitalization. While some of these cell signaling pathways became elevated at later time points in these patients, others did not change at all. Furthermore, these results suggest that the immune processes observed during resolution through discharge are specific to a successful response against COVID-19.

Core immune resolution features characterize COVID-19 patients recovering from ventilation

Having established immune features that accompany COVID-19 resolution among our entire patient cohort, we next examined the immunological changes within only the most severe patients, who required mechanical ventilation (Figure S5A). We analyzed immunological changes between three key time points: the first time point after a patient was intubated (tp1), the last time point before they were extubated (tp2), and the first time point after a patient was successfully extubated (tp3) (Figure 5A). This allowed us to evaluate the immunological dynamics that occur during ventilation (tp1 vs tp2) and during successful recovery from intubation (tp1 vs tp3). First, we analyzed the within-patient immune cell frequency changes between tp1 and tp3 (n = 9, Figure S5B). Consistent with patients resolving COVID-19, monocytes and activated CD4 and CD8 T cells significantly increased in frequency, while neutrophil frequency decreased during ventilation resolution (Figure 5B and 5C). Additionally, ventilation resolution was characterized by an increase of CD4 regulatory T cells (Tregs) and basophils at time of recovery (Figure 5B). Collectively, these changes were associated with a coordinated trajectory of recovery from tp1 to tp3 (Figure 5D). Despite these coordinated changes, patients did not return to an immune composition comparable to healthy donors, indicating that the time of extubation remains an active immunological phase of disease resolution from the most severe form of COVID-19.
Some key immune cell populations that remained different from healthy controls included activated CD4 and CD8 T cells as well as Tregs (Figure 5E and S5C). Of these changes, only the observed increase in activated CD8 T cells was apparent within patients during intubation (tp1 vs tp2), suggesting that the additional dynamic changes are specific to the resolution of severe COVID-19 (Figure S5D and S5E).

COVID-19 ventilation recovery is associated with T cell and monocyte phenotypic changes and a transition from pSTAT to pCREB dominated signaling

Next, we further analyzed the changes in immune cell activation and cell signaling dynamics that accompany ventilation resolution. Consistent with recovery trajectories in patients resolving COVID-19, activated CD8 T cells expressed higher levels of HLA-DR and lower levels of CCR7 at the time of extubation (Figure 5F), while neutrophils expressed lower levels of PD-L1 (Figure S5F). Additionally, while there was no difference in monocyte subset frequencies (Figure S5G), non-classical (CD16+) monocytes exhibited a shift from a CD64+ PD-L1+ phenotype during ventilation to a CD4+ CD11c+ HLA-DR+ activated monocyte phenotype at the time of extubation (Figure 5G, 5H, and 5I). CD64 expression on non-classical monocytes decreased incrementally between tp1 and tp3, demonstrating a progressive downregulation during the resolution phase (Figure 5I). pSTAT3 and pSTAT5 signaling was evident in CD4 Tregs, basophils, and activated CD8 T cells (Figure 5J, 5K, 5L, and S5H). Conversely, pCREB signaling was significantly increased after extubation (tp3) in CD4 Tregs and non-classical monocytes (Figure 5J, 5K, 5L, and S5H), suggesting a transition from an inflammatory cytokine signaling response to pro-survival signaling specifically within these cells. Visualizing these signaling trajectories in PCA space revealed a coordinated trajectory of immune cell signaling that accompanies extubation across patients (Figure 5M), though signaling states remained distinct from those in healthy individuals (Figure S5I). Taken together, our analyses identify a conserved set of immunological processes that are consistent among patients who recovered from mechanical ventilation as a result of COVID-19, elucidating an additional layer of immunological changes that are unique to these patients compared to recovery in patients who did not require mechanical ventilation.

Core immune resolution features define patients with better clinical outcomes at time of admission

Having identified a signature of immune remodeling during COVID-19 recovery, we next investigated if the early presence of these features was associated with better patient outcomes. We evaluated the immune composition of severe COVID-19 patients before or on the day they were ventilated (vent, n = 13) and compared it to the immunological state at time of admission (D0) for patients who never required ventilation (no vent, n = 50) (Figure 6A and S6A). Differential abundance analysis of immune cell frequencies revealed higher frequencies of monocytes and CD4 Tregs, as well as decreased neutrophil frequencies, in patients who never required ventilation (Figure 6B and 6C). Similar results were obtained when exclusively analyzing samples collected prior to ventilation (vent, n = 8) (Figure S6B and S6C). Patients who never required ventilation exhibited an immune state more similar to that of the healthy controls (Figure S6D).
While monocyte frequencies were significantly lower at time of admission in patients who required ventilation, we observed a consistent increase from time of intubation to time of discharge, with the steepest incline occurring right after the time of extubation (Figure 6D). The opposite directionality was observed for neutrophils (Figure 6D). Interestingly, CD4 Tregs, which are known to play a role in ARDS resolution and pulmonary recovery, demonstrated a gradual increase in frequency during patient intubation followed by the steepest increase after extubation (Mock et al. 2014; Garibaldi et al. 2013) (Figure S6E). Additionally, the phenotype of monocytes in patients who never required ventilation resembled the activated monocyte subset identified during discharge and ventilation recovery, expressing significantly higher levels of CD4 and CD11c (Figure S6F and S6G). Furthermore, basophil and CD4 Treg signaling states that were identified during ventilation resolution were already significantly higher at time of admission in patients who required ventilation (Figure 6E and S6H) and consistently decreased during ventilation (Figure 6F). Taken together, our results show a set of conserved core immune features that accompany disease resolution, with additional features that identify patients who recover from ventilation (Figure 6G). These ventilation-specific features differ significantly at time of admission between patients who will require mechanical ventilation and those who never require ventilation, and are thus associated with poorer clinical outcomes (Figure S6I).

Discussion:

Human immunology studies are inherently challenging because of the variability in baseline immune cell compositions, heterogeneity in immune responses, and difficulty in collecting longitudinal samples to track individuals over time. Because of the urgency to understand and respond to COVID-19, this cohort of patients provided a unique opportunity to recruit, study, and analyze a large number of individuals responding to the same infection over a finite period of time (April 2020 - April 2021). Since individuals recover from their infection over a variable amount of time, these studies highlight the benefit of longitudinal analysis anchored on key clinical events in the disease process. This analytical approach revealed the unifying trends among patients that define clinically relevant events such as discharge from the hospital or extubation after mechanical ventilation, regardless of initial disease severity or time to recovery. Our findings are consistent with several recent reports of immune responses to COVID-19 while contributing a new understanding of the immunological processes that accompany disease recovery, including changes in immune cell signaling states. Although some studies have suggested that early intervention to modulate immune hyperactivation may be beneficial in severe COVID-19 (Lucas et al. 2020), these data indicate that early immune cell signaling, particularly pSTAT3 and pSTAT6, correlates with shorter hospitalization and ventilation duration. This indicates that an early robust immune response, driven by pSTAT signaling, and its subsequent contraction during recovery may be beneficial for resolving COVID-19.
In patients who require mechanical ventilation, additional immunological processes involving increased Tregs and basophils accompany recovery, in addition to the core recovery trajectory observed in patients who did not require ventilation. Additionally, in our analysis, the STAT1 pathway downstream of type I IFN signaling was not differentially activated between patients with different disease severities. Instead, our study identified that many signaling pathways are activated simultaneously at the time of hospitalization, consistent with a recent report of concordant production of cytokines associated with type 1, 2, and 3 immune responses in patients with severe COVID-19 (Lucas et al. 2020). Despite the importance of B cells in generating SARS-CoV-2 neutralizing antibodies (Lucas et al. 2021), our work interestingly did not identify changes in circulating B cells associated with the recovery trajectory. This finding aligns with the clinical observation that B cell deficient patients or patients with agammaglobulinemia can recover from COVID-19 (Bange et al. 2021; Soresina et al. 2020), and suggests that B cells may contribute to immunological memory rather than to the resolution of severe COVID-19. Our work identified regulatory T cells as significantly changing only in patients who require ventilation, starting at significantly lower frequencies than in patients who never require ventilation support but gradually progressing to a steep increase after extubation. These findings are consistent with their critical role in pulmonary repair and ARDS recovery and specifically identify them as mediators of recovery from severe COVID-19 (Mock et al. 2014; Garibaldi et al. 2013). Overall, our study provides an understanding of the core immunological changes that accompany disease recovery from severe COVID-19 and provides a foundational model of a successful anti-SARS-CoV-2 immune response. This working model of a recovering immune response trajectory provides a benchmark to contextualize divergent immune processes during poor disease outcomes in immunosuppressed or immunocompromised patients, long-haul COVID-19 patients, pediatric patients with MIS-C, or responses to new variants. By elucidating a conserved trajectory of successful recovery, this study also nominates key immunological processes that could be targeted to enable recovery from severe disease in COVID-19 patients and perhaps other acute respiratory infections.

Acknowledgments:

This work was supported by generous grants from The Carlsberg Foundation (T.L.H.O.) and a COVID-19 Fast Grant (M.H.S.). C.E.B. was supported by fellowships from the NCI (1F31CA260938-01), the NSF Graduate Research Fellowship Program (GRFP), and the UCSF Discovery Fellowship. We would like to thank the NIAID Immunophenotyping Assessment in a COVID-19 Cohort (IMPACC) Network and the National Institutes of Health for their support (3U19AI077439-13S1 and 3U19AI077439-13S2). P.M.S. was supported by the Howard Hughes Medical Institute through the James H. Gilliam Fellowships for Advanced Study program. C.S.C. was supported by NHLBI (R35 HL140026). C.R.L. has received funding from NHLBI and NIAID, and C.M.H. was supported by an NHLBI K23 and a DOD grant.
Additionally, this project has been made possible in part by grant numbers 2019-202665 from the Chan Zuckerberg Foundation and TSK-020586 from Genentech. We acknowledge the Parnassus Flow Cytometry CoLab Facility, supported in part by NIH grants P30DK063720, S10OD018040, and S10OD021822. M.H.S. is a Chan Zuckerberg Biohub investigator and a Parker Institute for Cancer Immunotherapy investigator.

Declaration of interests

M.H.S. is a board member and equity holder in Teiko.bio and has received research support from Roche/Genentech, Bristol Myers Squibb, Pfizer, and Valitor. C.S.C. has received funding from NHLBI, FDA, DOD, Genentech, and Quantum Leap Healthcare Collaborative, and is on consulting/advisory boards for Vasomune, Gen1e Life Sciences, Janssen, and Cellenkos. C.M.H. has been consulting for Spring Discovery. P.G.W. has a contract from Genentech to study COVID-19.

Human subjects

Patients, or a designated surrogate, provided informed consent to participate in the study. The study is approved by the UCSF Institutional Review Board (IRB 20-30497).

Clinical study design and patient cohort

The clinical study was designed and implemented according to the IMPACC study design. Patients were recruited from the UCSF hospital system and Zuckerberg San Francisco General Hospital and they, or a designated surrogate, provided informed consent to participate in the study. Patients with presumed COVID-19 were enrolled within three days of hospital admission, and peripheral blood samples were collected under a protocol approved by the UCSF Institutional Review Board (IRB 20-30497). Patients with a confirmed positive SARS-CoV-2 polymerase chain reaction (PCR) test were designated as the COVID-19 positive cohort (n = 81), and patients without confirmed SARS-CoV-2 PCR were designated COVID-19 negative (n = 7). Healthy donors (n = 11) were recruited (IRB 19-27147) for a single peripheral blood time point and consisted of unexposed individuals in a similar age range as the hospitalized cohort. Clinical data and peripheral blood samples were collected at time of enrollment and throughout hospitalization (mainly on days 4, 7, 14, 21, and 28). If escalation of care was required, samples were collected within 24 and 96 hours of care escalation.

COVID-19 clinical severity classification

All COVID-19 patients in this study were admitted into the UCSF hospital system and remained there for the duration of our study. By definition, all in-patients reflect a World Health Organization (WHO) COVID-19 severity score of 3 or greater. Patient severity was determined by the clinical team to reflect the WHO COVID-19 severity scoring at each clinical time point throughout in-patient treatment. Based on WHO stratifications (World Health Organization 2021a) and consultation with the treating physician teams, our study combined WHO scores 5, 6, and 7 into the most severe clinical group. WHO scores of 3 and 4 correspond to the Mild and Moderate groups, respectively.

Blood samples were collected in one EDTA tube and processed within 6 hours of collection. Whole blood was divided into 540 µL aliquots and then fixed by addition of 756 µL of SmartTube Stabilizer (SmartTube Inc; Fisher Sci. Cat# 501351692).
After gentle mixing at room temperature for 10 min, the samples were transferred to labeled cryovials and immediately moved to −80°C for long-term storage.

Sample thawing and filtering

Samples were subsequently thawed by being placed for 10 min in a 4°C refrigerator and then incubated for 15 min in a room temperature water bath. After filtering with a 70 µm cell strainer (Celltreat, Cat# 229483) and washing in 45 mL Milli-Q H2O, samples were counted and barcoded.

Antibodies and staining procedure

The source for all mass cytometry antibodies can be found in Supplementary Table 1. Antibodies were conjugated to their associated metals with MaxPar X8 labeling reagent kits (Fluidigm) according to manufacturer instructions, diluted with Candor PBS Antibody Stabilization solution (Candor Bioscience, Cat# 130 050) supplemented with 0.02% sodium azide, and filtered through an UltrafreeMC 0.1 µm centrifugation filter (Millipore) before storage at 4°C. To reduce tube-to-tube pipetting variation, part of the signaling antibody panel came from a lyophilized antibody cocktail made at Stanford University as previously described (Han et al. 2018). Surface and intracellular master antibody cocktails were made and kept at −80°C in order to stain up to 600 samples.

Mass-tag cellular barcoding

Prior to antibody staining, mass-tag cellular barcoding of prepared samples was performed by incubating cells with distinct combinations of isotopically purified palladium ions chelated by isothiocyanobenzyl-EDTA, as previously described (Zunder et al. 2015). After counting, 1×10⁶ cells from each aliquot were barcoded with distinct combinations of stable Pd isotopes for 15 min at room temperature on a shaker in Maxpar Barcode Perm Buffer (Fluidigm, Cat# 201057). Cells were washed twice with cell staining media (PBS with 0.5% BSA and 0.02% NaN3) and pooled into a single 15 mL tube.

Mass cytometry staining

Barcoded cells were stained with Fc Receptor Blocking Solution (BioLegend, Cat# 422302) at 20 mg/mL for 5 min at RT on a shaker. The surface antibody cocktail was then added in a 500 µL final reaction volume for 30 min at RT on a shaker. Following staining, cells were washed twice with cell staining media. Before intracellular staining, cells were permeabilized for 10 min with methanol at 4°C. Methanol was then removed by washing the cells two times with cell staining media. The intracellular cocktail was then added to the cells in a 500 µL final reaction volume for 1 hour at RT on a shaker. Cells were washed twice in cell staining media to remove excess antibody and then stained overnight with 1 mL of 1:4000 191/193Ir iridium intercalator solution (Fluidigm, Cat# 201192B) diluted in PBS with 4% PFA. Before the mass cytometry run, cells were washed once with cell staining media and twice with Cell Acquisition Solution (Fluidigm, Cat# 201240).

Mass cytometry samples were diluted in Cell Acquisition Solution containing bead standards (Fluidigm, Cat# 201078) to approximately 10⁶ cells/mL and then analyzed on a Helios mass cytometer (Fluidigm) equilibrated with Cell Acquisition Solution. Approximately 0.5×10⁶ cell events were collected for each sample at an even rate of 400–500 events/second.
Data normalization and de-barcoding

Bead-standard data normalization and de-barcoding of the pooled samples into their respective conditions was performed using the R package from the Parker Institute for Cancer Immunotherapy (PICI), available at https://github.com/ParkerICI/premessa.

Quality control inclusion and exclusion criteria

In order to ensure high-quality sample collection, processing, and staining across the cohort, we developed a set of inclusion criteria required for each sample to be used in our data analysis. We processed and ran CyTOF on 498 peripheral blood samples. After de-barcoding and normalization, samples were uploaded to Cell Engine to assess adequate staining and cell number. Each barcode plate was run with a healthy peripheral blood (PB) control sample aliquoted from two healthy donors to validate staining and for normalization between barcode plates. If the control PB sample failed to stain the major immune cell populations (T cells, B cells, granulocytes, monocytes), no samples from that barcode plate were included. Individual samples were then assessed for CD45+ composition (>50% CD45+ staining required), cell abundance (>5,000 cells per sample required), and representation of the major immune cell populations (T cells, B cells, granulocytes, monocytes). 230 samples passed QC and were used in the batch normalization.

Manual gating

Batch-effect-normalized FCS files were uploaded to Cell Engine for manual gating. Major immune cell populations were identified based on a prior gating strategy (Allen et al. 2020). T cell subsets were further identified based on phenotypic markers specified in a prior publication that suggested these specific subtypes could play a role in COVID-19 severity (Mathew et al. 2020).

The multiparameter dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) was employed to visualize major shifts in immune distribution between COVID-19 positive, COVID-19 negative, and healthy individuals. CD45+ immune cells from healthy peripheral blood samples were compared to day 0 (D0) peripheral blood samples from COVID-19 positive and negative individuals, and the respective groups were concatenated into a single FCS file which was then used in the t-SNE algorithm on Cell Engine (cellengine.com). Only phenotypic markers were used as analysis channels and no phospho-signaling channels were input into the t-SNE visualization. The default settings for the t-SNE plot were utilized and a default of 90 nearest neighbors (k) was used. Manually gated immune cell populations were used to color the t-SNE plot to identify representative immune populations on the plot.

Defining groups and samples

For intra-patient resolution analyses, we defined three different groups: patients who were discharged within 30 days of enrollment in the study (<30 days), patients who were discharged after 30 days of enrollment in the study (>30 days), and patients who died. For patients who were discharged <30 days, the last sample (tp2) had to be obtained within 7 days of discharge. For patients who were discharged >30 days and patients who died, the last sample (tp2) had to be obtained within 50 days of discharge. For all groups, the first sample (tp1) had to be obtained within 14 days of enrollment.
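As a concrete illustration of the inclusion criteria above, the following R sketch filters a table of per-sample QC metrics. The data frame `qc` and its column names are hypothetical stand-ins for per-sample statistics that could be exported after gating, not the authors' actual pipeline.

```r
# Hypothetical per-sample QC table with one row per de-barcoded sample:
#   sample_id, pct_cd45 (fraction of events that are CD45+), n_cells,
#   has_t, has_b, has_gran, has_mono (logical: major population detected in gating)
apply_qc <- function(qc) {
  keep <- qc$pct_cd45 > 0.50 &                              # >50% CD45+ staining required
          qc$n_cells > 5000 &                               # >5,000 cells per sample required
          qc$has_t & qc$has_b & qc$has_gran & qc$has_mono   # major populations present
  qc[keep, ]
}

# Example usage with a toy table:
qc <- data.frame(sample_id = c("s1", "s2"),
                 pct_cd45 = c(0.92, 0.41), n_cells = c(21000, 8000),
                 has_t = TRUE, has_b = TRUE, has_gran = TRUE, has_mono = TRUE)
apply_qc(qc)   # keeps s1 only; s2 fails the CD45+ criterion
```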
For the intra-patient ventilation recovery analysis, samples had to be obtained within 7 days of the point of interest, e.g. going on a ventilator / coming off a ventilator. For all comparisons, if multiple samples fulfilled the requirements, we used the sample closest to the event of interest (a minimal sketch of this selection rule is given after the figure legends below). The number of patients and specific sampling timepoints used for each analysis are illustrated in the supplementary figures.

Statistical analysis

All statistical tests were performed in R (R Core Team 2013; RStudio Team 2016). The non-parametric Wilcoxon rank sum test was utilized to compare immune population frequencies, median protein expression values, and median signaling molecule values between groups of interest. For intra-patient analyses, we used the paired Wilcoxon rank sum test. For multiple testing corrections, we applied the Benjamini-Hochberg correction, and statistical differences were declared significant at FDR < 0.1. When multiple testing correction was not applied, statistical differences were declared significant at P < 0.05. Most of the plots were produced with the R package ggplot2 (Wickham 2016).

Figure 3: D) Principal component analysis of significant immune cell subsets in 3B for tp1, tp2, and healthy controls. Immune cell directionality and contribution to PCA space denoted on right (top). Summary ellipsoid of tp1, tp2, and healthy patients in PCA space on right (bottom). E) Population frequencies of significant immune cell subsets in 3B for tp1, tp2, and healthy controls. Stars indicate the median value for each group. Cell populations are highlighted in green if tp2 is closer to healthy than tp1, and highlighted in yellow if tp2 is moving away from healthy. F+G) Protein expression on CD8 and CD4 activated T cells (F) and on monocyte subsets (G) at tp1 and tp2. Mean protein expression values have been log10 transformed, scaled, and centered on the heatmap. Bars indicate mean protein expression across all samples. Only significant proteins are shown (Wilcoxon Rank Sum Test, Benjamini-Hochberg correction with FDR < 0.1). H) Scatter plots of CD11c and HLA-DR expression on non-classical monocytes in patient 1344 at D0 (top) and D7 (bottom). I) Expression of signaling molecules in significant immune cell subsets in 3B at tp1 and tp2. Median signaling expression values have been centered on the heatmap. Only significant signaling molecules are shown (Wilcoxon Rank Sum Test, Benjamini-Hochberg correction with FDR < 0.1). J) Expression of pTBK1 in CD8 activated T cells, and pSTAT3 expression in CD8 activated T cells and classical monocytes at tp1 and tp2. Lines connect samples from the same patient. P-values obtained by paired Wilcoxon Rank Sum Test. K) Expression of PD-L1 on non-classical monocytes at tp1 and tp2. Lines connect samples from the same patient. P-values obtained by paired Wilcoxon Rank Sum Test.

Figure 4: Immune features associated with COVID-19 resolution are absent in patients who are discharged late or die from COVID-19. A) Illustration of intra-patient analysis of patients who are hospitalized for >30 days (n = 6) and patients who die (n = 5). B) Median cell population frequencies at tp1 (red) and tp2 (blue) for patients who are discharged <30 days, >30 days, and deceased.
C) Representative scatter plots of activated CD8 T cells (defined by CD38 and HLA-DR expression) at tp1 (left) and tp2 (right) for patients who are discharged <30 days, >30 days, and deceased. D) Magnitude of change, illustrated by log2FC × -log10(p-value), of signaling molecules (identified in Figure 3I) for patients who are discharged within 30 days (<30 days, green), discharged after 30 days (>30 days, blue), and die (red). P-values obtained by paired Wilcoxon Rank Sum Test. E+F) Median signaling molecule expression at tp1 (red) and tp2 (blue) for patients who are discharged <30 days, >30 days, and deceased. G) Monocyte frequencies (left plots) and CD8 activated T cell pERK expression (right plots) relative to time to discharge in all samples from patients who are discharged <30 days (n = 142 samples) or >30 days (n = 30 samples). Black lines connect samples from the same patient. Blue lines and grey shadows represent the best fitted smooth line and 95% confidence interval. Dotted lines intersect the x-axis at day 30.

Figure 5: Recovery from severe COVID-19 requires core immune resolution features and additional regulatory T cell and basophil upregulation. A) Illustration of intra-patient analysis of ventilated patients. Three timepoints are considered: tp1 (first sample after a patient has been put on a ventilator), tp2 (last sample before the patient is removed from a ventilator), and tp3 (first sample after a patient is successfully removed from ventilation support). B) Paired differential expression analysis of immune cell populations between the first (tp1) and third (tp3) timepoints illustrated in 5A (paired Wilcoxon Rank Sum Test). The log2 fold changes (tp3 vs tp1) are plotted against the negative log10(p-values). Colors indicate if cell populations are significantly down- (blue) or upregulated (purple) from tp1 to tp3 or not differentially expressed (FALSE, grey) after Benjamini-Hochberg correction, FDR < 0.1. C) Frequency of monocytes, neutrophils, CD4 Tregs, and CD8 activated T cells at tp1 and tp3. Lines connect samples from the same patient. P-values obtained by paired Wilcoxon Rank Sum Test. CD8 activated T cells are shown as a percentage of the parent population (e.g. CD8 T cells), while monocytes, neutrophils, and CD4 Tregs are shown as a percentage of all cells. D) Principal component analysis of significant immune cell subsets in 5B for tp1, tp3, and healthy controls. Immune cell directionality and contribution to PCA space denoted on the right. E) Population frequencies of significant immune cell subsets in 5B for tp1, tp3, and healthy controls. Stars indicate the median value for each group. Cell populations are highlighted in green if tp3 is closer to healthy than tp1, and highlighted in yellow if tp3 is moving away from healthy. F+G) Protein expression on CD8 activated T cells (F) and on monocyte subsets (G) at tp1 and tp3. Mean protein expression values have been log10 transformed, scaled, and centered on the heatmap. Bars indicate mean protein expression across all samples. Only significant proteins are shown (Wilcoxon Rank Sum Test, Benjamini-Hochberg correction with FDR < 0.1). H) Expression of PD-L1 on non-classical monocytes at tp1 and tp3.
Lines connect samples from the same patient. P-values obtained by paired Wilcoxon Rank Sum Test. I) Left: Scatter plots of CD11c and HLA-DR expression on non-classical monocytes in patient 1276 at D0 (tp1, top) and D28 (tp3, bottom). Right: Expression of CD64 on non-classical monocytes for patient 1279 from D0 (tp1) to D28 (tp3). J) Expression of signaling molecules in significant immune cell subsets in 5B at tp1 and tp3. Median signaling expression values have been centered on the heatmap. Only significant signaling molecules are shown (Wilcoxon Rank Sum Test, Benjamini-Hochberg correction with FDR < 0.1). K) Expression of pSTAT1 (left) and pCREB (right) in CD4 Tregs at tp1 (blue) and tp3 (orange) for representative patients. L) Expression of pSTAT1 and pCREB in CD4 Tregs at tp1 and tp3. Lines connect samples from the same patient. P-values obtained by paired Wilcoxon Rank Sum Test. M) Principal component analysis of significant signaling molecules in 5I for tp1, tp3, and healthy controls. Immune cell directionality and contribution to PCA space denoted on the right.

Supplemental Figure 3: A) Samples used for the intra-patient analysis in Figure 3 of patients who were discharged from the hospital within 30 days of admission (n = 32). Points indicate sample timepoints and are coloured according to WHO score. Green points indicate the day of discharge. B) Paired differential expression analysis of protein expression on monocyte subsets, neutrophils, and CD8 and CD4 activated T cells between the first (tp1) and second (tp2) timepoints.

Supplemental Figure 5: A) All samples for patients who have been put on a ventilator. Dark blue points indicate when a patient is put on a ventilator. Light blue points indicate when a patient is taken off a ventilator. B) Samples used for the intra-patient analysis between tp1 and tp3 in Figure 5 of patients who have been put on a ventilator (n = 9). Points indicate sample timepoints and are coloured according to WHO score. Dark blue points indicate when a patient is put on a ventilator. Light blue points indicate when a patient is taken off a ventilator. C) Frequencies of CD4 Tregs and basophils at tp1, tp3, and in healthy controls. P-values obtained by Wilcoxon Rank Sum Test. D) Samples used for the intra-patient analysis between tp1 and tp2 in Figure 5 for patients who have been put on a ventilator (n = 11). Points indicate sample timepoints and are coloured according to WHO score. Dark blue points indicate when a patient is put on a ventilator. Light blue points indicate when a patient is taken off a ventilator. E) Paired differential expression analysis of immune cell populations between the first (tp1) and second (tp2) timepoints illustrated in 5A (paired Wilcoxon Rank Sum Test). The log2 fold changes (tp2 vs tp1) are plotted against the negative log10(p-values).
Colors indicate if cell populations are significantly down- (blue) or upregulated (purple) from tp1 to tp2 or not differentially expressed (FALSE, grey) after Benjamini-Hochberg correction, FDR < 0.1. F) Paired differential expression analysis of protein expression on neutrophils between the first (tp1) and third (tp3) timepoints.

Supplemental Figure 6: A) Samples used in Figure 6. For ventilated patients (n = 13), the latest sample before the patient is put on a ventilator or, if available, the sample at the day of ventilation is used. For non-ventilated patients (n = 50), D0 is used. B) Samples obtained prior to ventilation (n = 8). C) Differential expression analysis of immune cell populations between ventilated (from S6B) and non-ventilated patients (from S6A) (Wilcoxon Rank Sum Test). The log2 fold changes (vent vs no vent) are plotted against the negative log10(p-values). Colors indicate if cell populations are significantly down- (blue) or upregulated (purple) for vent vs no vent or not differentially expressed (FALSE, grey) after Benjamini-Hochberg correction, FDR < 0.1. D) Population frequencies of significant immune cell subsets in 6B for ventilated patients, non-ventilated patients, and healthy controls. Stars indicate the median value for each group. Cell populations are highlighted in green if non-ventilated patients are closer to healthy controls than ventilated patients. E) CD4 Treg frequencies relative to intubation / extubation in all samples from ventilated patients. Black lines connect samples from the same patient. Blue lines and grey shadows represent the best fitted smooth line and 95% confidence interval. Dotted lines intersect the x-axis at the day of intubation / extubation.
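The sample-selection rule referenced in the Methods ("the sample closest to the event of interest", required to fall within a fixed window) can be written compactly. The sketch below is a hypothetical R helper with made-up column names that illustrates the rule; it is not the authors' code.

```r
# Pick, for one patient, the sample closest to a clinical event (e.g. intubation),
# requiring it to fall within `window_days` of that event; returns NA if none qualify.
# `samples` is assumed to have columns: sample_id and day (study day of collection).
closest_sample <- function(samples, event_day, window_days = 7) {
  dist <- abs(samples$day - event_day)
  eligible <- which(dist <= window_days)
  if (length(eligible) == 0) return(NA_character_)
  samples$sample_id[eligible[which.min(dist[eligible])]]
}

# Example: samples on days 0, 6, and 15; extubation on day 12 -> picks the day-15 sample.
samples <- data.frame(sample_id = c("a", "b", "c"), day = c(0, 6, 15))
closest_sample(samples, event_day = 12, window_days = 7)   # "c"
```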
Coordinated multi-wavelength observations of Sgr A*

We report on recent near-infrared (NIR) and X-ray observations of Sagittarius A* (Sgr A*), the electromagnetic manifestation of the ∼4×10⁶ M⊙ super-massive black hole (SMBH) at the Galactic Center. The goal of these coordinated multi-wavelength observations is to investigate the variable emission from Sgr A* in order to obtain a better understanding of the underlying physical processes in the accretion flow/outflow. The observations have been carried out using the NACO adaptive optics (AO) instrument at the European Southern Observatory's Very Large Telescope (July 2005, May 2007) and the ACIS-I instrument aboard the Chandra X-ray Observatory (July 2005). We report on a polarized NIR flare synchronous with an 8×10³³ erg/s X-ray flare in July 2005, and a further flare in May 2007 that shows the highest sub-flare to flare contrast observed until now. The observations can be interpreted in the framework of a model involving a temporary disk with a short jet. In the disk component, flux density variations can be explained by hot spots on relativistic orbits around the central SMBH. The variations of the sub-structures of the May 2007 flare are interpreted as a variation of the hot spot structure due to differential rotation within the disk.

Introduction

The investigation of the dynamics of stars has provided compelling evidence for the existence of a super-massive black hole (SMBH) at the center of the Milky Way. At a distance of only ∼8 kpc, a SMBH of mass ∼4×10⁶ M⊙ can convincingly be identified with the compact radio, infrared, and X-ray source Sagittarius A* (Sgr A*; Eckart & Genzel 1996, Ghez et al. 1998, 2003, 2004ab, 2005, Eisenhauer et al. 2003). Additional strong evidence for a SMBH at the position of Sgr A* came from the observation of flare activity on hourly time scales both in the X-ray and NIR wavelength domains (Baganoff et al. 2001; Genzel et al. 2003; Ghez et al. 2004). Due to its proximity, Sgr A* provides us with a unique opportunity to understand the physics and possibly the evolution of SMBHs at the nuclei of galaxies. Sgr A* is remarkably faint (≤10⁻⁹ of the Eddington rate) in all wavebands. Its surprisingly low luminosity has motivated many theoretical and observational efforts in the past decade to explain the processes that are at work in the immediate vicinity of Sgr A*. By now, it is generally accepted that its feeble emission is due to a combination of a low accretion rate and a low radiation efficiency. An intense discussion among the theoretical community at present focuses on radiatively inefficient accretion flow and jet models. For a recent summary of accretion models and variable accretion of stellar winds onto Sgr A* see Yuan (2006) and Cuadra & Nayakshin (2006). The first successful simultaneous NIR/X-ray campaigns combined NACO and Chandra as well as mostly quasi-simultaneous mm data from BIMA, SMA, and VLA (Eckart et al. 2004). The NIR/X-ray variability is probably also linked to the variability at radio through sub-millimeter wavelengths, showing that variations occur on time scales from hours to years (Bower et al. 2002, Zhao et al. 2003). The temporal correlation between rapid variability of the near-infrared (NIR) and X-ray emission suggests that the emission arises from a compact source within a few tens of Schwarzschild radii of the SMBH (Eckart et al. 2004). In this work, we assume for Sgr A* R_s = 2R_g = 2GM/c² ∼ 8 µas, with R_s being one Schwarzschild radius and R_g the gravitational radius of the SMBH.
For several simultaneous flare events the authors found no time lag larger than an upper limit of ≤10 minutes, mainly constrained by the required binning width of the X-ray data. The flaring state can be explained with a synchrotron self-Compton (SSC) model involving up-scattered sub-millimeter photons from a compact source component. Inverse Compton scattering of the THz-peaked flare spectrum by the relativistic electrons then accounts for the X-ray emission. This model allows for NIR flux density contributions from both the synchrotron and SSC mechanisms. Observations of red and variable NIR flare spectra (Eisenhauer et al. 2005, Hornstein et al. 2007, Gillessen et al. 2006) are indicative of a possible exponential cutoff of the NIR/MIR synchrotron spectrum (Eckart et al. 2004). There is also evidence for a modulation of the NIR emission that may be due to hot spots orbiting Sgr A* in the accretion flow (Eckart et al. 2006, 2008, Meyer et al. 2006ab, 2007). The NIR flare emission is polarized, with a well limited range over which the position angle of the polarized emission is changing (60° ± 20° east of north) (Eckart et al. 2006, Meyer et al. 2006ab, 2007). All these observations can be explained within a model of a temporary accretion disk that occasionally contains one or several bright orbiting hot spot(s), possibly in conjunction with a short jet, and suggest a stable orientation of the source geometry over the past few years. The millimeter/submillimeter wavelength polarization of Sgr A* is variable in both magnitude and position angle on timescales down to a few hours. Marrone et al. (2007) present simultaneous observations made with the Submillimeter Array polarimeter at 230 and 350 GHz with sufficient sensitivity to determine the polarization and rotation measure in each band. From their measurements they deduce an accretion rate that does not vary by more than 25% and - depending on the equipartition constraints and the magnetic field configuration - amounts to 2×10⁻⁵ to 2×10⁻⁷ M⊙ yr⁻¹. The mean intrinsic position angle is 167° ± 7°, with variations of ∼31° that must originate in the sub-millimeter photosphere of SgrA*. Here, we present data and modeling for three events: polarimetric NIR observations of a very bright flare from May 2007, and Chandra X-ray measurements from 2005 and 2004 that were taken in parallel with NIR photometric and polarimetric measurements of flares reported by Eckart et al. (2006ab). In Section 2 we summarize the observations and the data reduction. The observational results and modeling of the data are presented in Section 3, and a more general discussion of available infrared and X-ray variability data on Sgr A* is given in Section 4. In Section 5 we briefly discuss the interaction of the GC ISM with a potential wind/partly collimated outflow that originates in the vicinity of Sgr A*. In Section 6 we summarize our findings and draw some conclusions.

The Chandra X-ray data fully cover the polarized NIR flare that we observed at the VLT in July 2005. The X-ray data show an 8×10³³ erg/s flare that is about 3 times as bright as the quiescent emission from SgrA*. In the right panels of Fig. 1 we show the corresponding X-ray and NIR lightcurves using a 207 second bin size. The cross-correlation of the X-ray data with the flux densities in the individual NIR polarization channels shows that the flare event observed in the two wavelength bands is simultaneous to within less than 10 minutes.
The two sub-peaks in the cross-correlation function correspond to two apparent sub-peaks in the X-ray light curve that can, however, not be taken as significant given the SNR of ∼3 cts/s per integration bin. In the X-ray domain there is no clear indication of a sub-flare structure as observed in the NIR. The NIR sub-flare contrast, defined as the sub-flare height divided by the height of the overall underlying flare flux density, ranges between 0.3 and 0.9.

Basic building blocks of relativistic disk modeling of the flares

We interpret our polarized infrared flare events via the emission of spots on relativistic orbits around the central SMBH in a temporary disk (Eckart et al. 2006, Meyer et al. 2006ab, 2007). The model calculations are based on the KY code by Dovciak, Karas, & Yaqoob (2004) and are usually done for a single spot orbiting close to the corresponding last stable orbit. The possibility to explore effects of strong gravity via time-resolved polarimetric observations of X-rays (which also inspired writing the KY code) was originally proposed by Connors & Stark (1977). The amplification light curves for individual hot spots that can be computed with the KY code are used as the basic building blocks of our models, because even a complicated (non-axisymmetric) pattern on the disk surface can be represented as a suitable combination of emitting spots. At this point we just remind the reader that relativistic effects do not actually produce polarization by themselves; rather, they can change the polarization angle and the overall polarization degree of an intrinsically polarized signal, because each photon experiences a different gravitational effect along its path from the point of emission to the observer. In the case of a single spot as the source of the emission, the observed polarization vector is expected to wobble or rotate as a function of the spot phase. This is a purely geometrical effect connected with the presence of the strong gravitational field. Naturally, the intrinsic changes of the spot polarization are superposed on top of this relativistic effect.

As a result of model calculations, we show for two cases representative flux density distributions and NIR/X-ray model light curves with noise. Different distributions of spots in the disk were assumed in the two cases. The flux density distributions are shown along the last stable orbit perimeter of the super-massive black hole associated with Sgr A* (upper panels). Here, no truncation at or just within the last stable orbit has been applied. The contour lines are at 12, 25, 50, and 75% of the peak of the flux density distribution. The NIR and X-ray light curves shown in the lower panels are representative of the median values obtained in the model calculations. For the X-ray data we added noise comparable to the noise contributions obtained in the observations. For the NIR we added 0.4 mJy of random Gaussian noise. The bin size of the model data corresponds to 207 s. The period for one revolution of the hot spots around Sgr A* has been set to 14 min. The position of Sgr A* is indicated by a white cross.

A multi-component disk model

The observed NIR/X-ray properties of the Sgr A* light curves raise a number of questions: Can we expect a sub-flare structure in the X-ray domain using a synchrotron self-Compton model? What is the approximate flux distribution within a temporary accretion disk around Sgr A*?
This is also closely related to more general questions of how the observed light curve properties vary if the lifetime of the spots, shearing, and synchrotron cooling time scales are considered. In the following we describe an extended SSC model that includes a disk structure composed of a combination of hot spots of different brightness and with different initial orbital locations.

The SSC disk model

In order to explain the time-dependent flare properties, we assume that the sub-flare and disk components can be described by a number of individual synchrotron and SSC emitting source components. Combining the light amplification curves for individual orbiting spots and a simple SSC model, we can obtain zero-order time-dependent flare characteristics from the NIR to the X-ray domain. As a starting point we used synchrotron models that represent a high flux density, i.e. flaring, state and a low flux density state. Greenhough et al. present a scaling law between the magnetic stress in units of the gas pressure and the vertical disk cell size in units of the pressure scale height, implying that the magnetic field and source component size follow a power-law relation. Therefore we assume that the essential quantities of the SSC models, i.e. the turnover flux density S_m and frequency ν_m as well as the source size θ of the individual source components, are distributed as power laws with the boundary values taken from the high and low flux density state models. The corresponding power-law indices of the distributions are α_S, α_ν, and α_θ. For example, if α_S = 0 the flux densities of the source components are distributed over the full range between the minimum and maximum values with equal frequencies. For α_S > 0 and α_S < 0 there is an increasing preference towards larger and lower flux density values, respectively. The same holds for α_ν and α_θ. The innermost stable circular orbit (ISCO) around a non-rotating black hole with spin parameter a = 0 is 6 R_g. Assuming the co-rotating case, that radius will shrink for higher spin parameters. For a rotating black hole with a = 0.5 the ISCO is ∼4.4 R_g. Model calculations have shown (Meyer et al. 2006ab) that for Sgr A* spin parameters a ≥ 0.5 and source components orbiting at radii larger than the ISCO are very likely. It was furthermore shown that the disk is small, with an outer disk radius extending not much further than 2 R_s beyond the ISCO. Following Meyer et al. (2006ab), we can safely assume that the disk is well sampled using a total of 10 Gaussian-shaped disk sections with random values of S_m and ν_m taken from the described power-law distributions in order to model the entire accretion disk (see for more details). The brightest of these sections will then represent the orbiting spot and the rest will account for the underlying disk. This setup will, of course, also allow for several bright spots. As a simple - but still general - model we assumed the source components to be equally spaced along the circumference of a constant orbit. While orbiting, the flux density of each component follows the achromatic magnification curves that can be calculated as a function of the spin parameter a, the inclination i, and the orbital radius with the KY code as described in section 3.1. In order to model the limited lifetimes of the spots we apply a Gaussian-shaped weighting function with a FWHM of about 3 orbital periods, which resembles the observed flare lengths quite well. The combination of these weighting functions covers the overall flare.
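To make the setup above concrete, the sketch below draws the component flux normalizations from a truncated power-law distribution between assumed low-state and high-state boundary values, assigns each of the 10 disk sections a Gaussian lifetime weight with FWHM ≈ 3 orbital periods, and modulates each component with a simple Doppler-boosting factor. The boost is only a stand-in for the relativistic (KY-code) amplification curves, which additionally include lensing, light bending, and gravitational redshift; the boundary values, orbital speed, and inclination are illustrative assumptions, not the values used by the authors, and the full model would also draw ν_m and θ and evaluate the SSC spectrum for each component.

```r
# Toy multi-component NIR light curve for the SSC disk model (illustrative only).
set.seed(1)
n_comp <- 10                            # Gaussian-shaped disk sections
P_orb  <- 14                            # orbital period in minutes (as quoted in the text)
t      <- seq(0, 120, by = 207/60)      # 2-hour light curve sampled in 207 s bins

# Truncated power-law draw between lo and hi with index alpha (alpha = 0 -> uniform).
rpow <- function(n, lo, hi, alpha) {
  u <- runif(n)
  if (abs(alpha + 1) < 1e-9) return(lo * (hi / lo)^u)      # alpha = -1 limit
  (lo^(alpha + 1) + u * (hi^(alpha + 1) - lo^(alpha + 1)))^(1 / (alpha + 1))
}

S_m   <- rpow(n_comp, lo = 0.5, hi = 5, alpha = -1)         # component flux [mJy], assumed bounds
phase <- 2 * pi * (seq_len(n_comp) - 1) / n_comp            # equally spaced along one orbit
t0    <- runif(n_comp, 0, 120)                              # centre of each spot's lifetime
sigma <- (3 * P_orb) / 2.355                                # Gaussian lifetime, FWHM = 3 P_orb

# Stand-in for the KY amplification curve: Doppler boost of an orbiting spot,
# observed flux ~ delta^3, with orbital speed beta and inclination assumed here.
boost <- function(t, phase, beta = 0.5, incl = 70 * pi / 180) {
  mu <- sin(incl) * cos(2 * pi * t / P_orb + phase)         # line-of-sight velocity fraction
  (sqrt(1 - beta^2) / (1 - beta * mu))^3
}

lc <- rowSums(sapply(seq_len(n_comp), function(k)
  S_m[k] * boost(t, phase[k]) * dnorm(t, t0[k], sigma) / dnorm(0, 0, sigma)))
lc <- lc + rnorm(length(t), sd = 0.4)                       # 0.4 mJy random Gaussian noise
```

Summing the boosted, lifetime-weighted components reproduces the qualitative behaviour described in the text: an overall flare envelope with sub-flares whose contrast depends on how strongly the brightest disk section dominates the others.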
The model implies that the lifetime of the spots is short with respect to the orbiting time scale and the overall flare duration, and that new hot spots have to be created. It is not yet clear what kind of process would best describe the creation and subsequent extinction of the spots. The assumption that spots arise statistically independently of each other is a reasonable first approximation; however, it is quite likely that some kind of relationship between the spots will have to be included, for example within the framework of the avalanche mechanism (Pechacek et al. 2008).

Results of the Modeling

An important result of the simulations is that not only the observed total NIR and X-ray flux densities can be successfully modeled but also the observed sub-flare contrast. In addition, the best fits to the NIR and X-ray flux densities lie within or close to regions of high NIR flux-density-weighted magnetic field strength. Under the assumption that the NIR polarization measurements are being used, the NIR flux-density-weighted magnetic field strength then represents the magnetic field value that is most likely to be measured. This demonstrates that the combination of the SSC modeling and the idea of a temporary accretion disk can realistically describe the observed NIR polarized flares that occur synchronously with the 2-8 keV X-ray flares. We find that the power-law index α_S of the assumed power-law distribution for the synchrotron peak flux S_m gives the best model results for values around α_S = -1 ± 1. A value of α_S = 0 (which is included in this range) represents scenarios in which source components cover the entire range of possible flux densities with an equal probability for each value, rather than being biased such that the components have a large probability of having similar brightness. Values of α_S ≈ 0 provide high sub-flare contrast values. An exponent of α_S = -1 favors a higher frequency of lower flux density values. In the SSC model, high contrast is provided by the SSC contribution to the NIR spectral range, also allowing for χ²-fits at lower flux-density-weighted magnetic field strengths around 30 G rather than 60 G as for the synchrotron model. Magnetic field strengths between 5 and 70 Gauss (Yusef-Zadeh et al. 2008) are consistent with sub-mm/mm variability timescales of synchrotron components with THz-peaked spectra and the assumption that these source components have an upper frequency cutoff ν₂ in the NIR, i.e. that they contribute significantly to the observed NIR flare flux density. Here the upper frequency cutoff of the synchrotron spectrum is assumed to be at ν₂ = 2.8 × 10⁶ B γ₂² in Hertz, with the magnetic field strength in Gauss. The Lorentz factor γ₂ corresponds to the energy γ₂mc² at the upper edge of the electron power spectrum. For γ₂ ∼ 10³ and B around 60 G, the synchrotron cutoff falls into the NIR. In order to match the overall typical flare timescale of about 2 hours, and given a minimum turnover frequency around 300 GHz, the minimum required magnetic field strength is of the order of 5 Gauss.
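As a quick numerical check of the cutoff estimate quoted above, the snippet below evaluates ν₂ = 2.8 × 10⁶ B γ₂² Hz for the quoted values; the conversion to wavelength is the only step added here.

```r
# Synchrotron upper cutoff: nu2 = 2.8e6 * B * gamma2^2  [Hz], with B in Gauss.
nu2 <- function(B, gamma2) 2.8e6 * B * gamma2^2

c_light <- 2.998e8                       # speed of light in m/s
nu  <- nu2(B = 60, gamma2 = 1e3)         # 1.68e14 Hz
lam <- c_light / nu                      # ~1.8e-6 m
c(frequency_Hz = nu, wavelength_um = lam * 1e6)   # cutoff near 1.8 micrometres, i.e. in the NIR
```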
This is required as a minimum value to have the cooling time of the overall flare less than the duration of the flare (Yuan, Quataert, Narayan 2003; Quataert 2003).

After adding appropriate noise (estimated from the available X-ray data) to the X-ray modeling results, it becomes apparent that, at the given SNR and data sampling, short-term variability in the X-ray data is difficult to determine, even if it has a modulation contrast similar to that observed in the NIR (see Fig. 3). Bright spots may on average have smaller sizes or lower cutoff frequencies. An increase of the SSC X-ray flux density due to an increase of the THz peak synchrotron flux may be compensated by this effect. Hence the sub-flare contrast may be much lower in the X-ray compared to the NIR domain. Motivated by the fact that the May 2007 data show evidence for hot spot evolution due to differential rotation within the relativistic disk (see Fig. 5), we assumed that an increase of the source size of the individual spots may be of importance during the flare events. Therefore, starting at the center of the July 2004 flare event, we assumed in both cases a 30% increase of the source component sizes over 30 minutes. Such a scenario may also explain the 2006 July 17 Keck NIR/X-ray light curves reported by Hornstein et al. (2007). The authors measured an NIR flare without a detectable X-ray counterpart. It was delayed by about 45 minutes from a significant X-ray flare, during which no NIR data were taken. Assuming that the X-ray flare was accompanied by an unobserved NIR flare as well, this event may have been very similar in structure to the July 2004 flare.

Sub-flares and quasi-periodicity

For Fig. 4 we calculated 2.2 µm light curves under the assumption of decreasing time scales for the magneto-hydro-dynamical stability of the source components in the accretion disk around SgrA*. The thin vertical lines mark the centers of stability intervals with Gaussian-shaped flux density weights. These Gaussian-shaped weights cover a time interval over which source components can be considered as being stable. Between such time intervals, new source components within the accretion disk are being formed. The marks are spaced by a FWHM of the individual Gaussians. This arrangement results in light curves similar to the observed ones (see for more details). We assumed that for each of these intervals the flux density distribution within the disk is different. This results in phase shifts between the light curves (i.e. different positions of the spot within the disk) of ±π. This simulation shows that the overall appearance (especially the mean QPO frequency) of the light curve can be preserved and that variations in the sub-flare amplitude and time separations can be explained by such a scenario. In Fig. 4, T_S is the synchrotron cooling time in the NIR, which is of the order of several minutes for the magnetic field strengths used here. Assuming that the period of the observed QPO is 17 ± 3 minutes, we can derive an expected full width of the power spectrum peak of Δν ∼ 0.02 min⁻¹. Following Schnittman et al. (2006), this corresponds to an expected lifetime of the spots of T_lif ∼ 4 minutes, a value similar to the synchrotron cooling time in the NIR K-band. However, quasi-simultaneous K- and L-band measurements by Hornstein et al. (2007) show that for several 1 to 2 hour stretches of variable K-band emission ≥3 mJy, including flares of 10 to 30 minutes duration, the light curves at both wavelengths are well correlated.
This correlation suggests that the synchrotron cooling time scale in this case is longer than the flare time scale and is therefore not the relevant quantity for the spot lifetime. In addition, the spread Δν is an upper limit to the width of a possible Lorentzian distribution describing the QPO measurements. Therefore, we have to assume that T_lif is even longer than the synchrotron cooling times at K- and L-band, i.e., significantly longer than 13 minutes, suggesting that the spot lifetime could be of the order of T_orb, in agreement with results by Schnittman et al. (2005) and the model calculations presented here. The synchrotron cooling time scales may not be relevant at K- and L-band if the heating time scale is longer (e.g. on the time scale of the overall NIR or sub-mm flare event) or if some additional mechanism is at work that stabilizes the spots in the temporary accretion disk of SgrA*. A small spot size and a high magnetic field intrinsic to the spot may help to prevent strong shearing, lowering the requirements for this confinement mechanism. Alternative models to the black hole scenario Explaining SgrA* with alternative solutions to an MBH becomes increasingly difficult (see the discussion in the appendix of Eckart et al. 2006b). Stellar orbits near SgrA* make a universal fermion ball solution for compact galactic nuclei highly unlikely, and especially the fact that SgrA* appears to be a strongly variable and mass-accreting object represents a problem for the stability constraints that boson or fermion balls have. It is, for instance, quite a delicate process to form a boson star and to prevent it from collapsing into a supermassive black hole despite further accretion of matter, a non-spherically symmetric arrangement of forces (as in the case of a jet), or matter orbiting the center well within the boson star. Such a massive boson star scenario could already be excluded for the nucleus of MCG-6-30-15 (Lu & Torres 2003). In the case of a stationary boson star the orbital velocity close to the last stable orbit (LSO) at a radius of ∼3 R_S is already ∼3 times lower than that of a Schwarzschild MBH (Lu & Torres 2003), and relativistic effects are severely diminished and further reduced at even smaller radii. If the variability that is indicated especially by the infrared polarization data is indeed due to orbital motion of a spot within a temporary accretion disk, then a stationary boson star can be excluded as an alternative solution for SgrA*, because in this case one expects the orbital periods to be larger. Proper motion measurements of the thin filaments close to SgrA* show that the shape and the motion of the filaments do not agree with a purely Keplerian motion of the gas in the potential of the supermassive black hole at the position of SgrA*. Therefore, additional mechanisms must be responsible for their formation and motion. The authors argue that the properties of the filaments are probably related to an outflow from the disk of young mass-losing stars around SgrA*. In part, the outflow may originate from SgrA* itself. The same study (2007) also derives the proper motions of two cometary shaped dusty sources close (in projection) to SgrA* (Fig. 6). The V-shaped dust shells may indicate an interaction with a strong wind from the direction of Sgr A* (Fig. 6). The central cluster of massive stars provides ∼3 × 10⁻³ M_⊙ yr⁻¹ of gas in the form of stellar winds to the center (Najarro et al. 1997).
However, about 99% of the material from stellar winds does not even get close to the Bondi radius and must therefore escape the central arcseconds in the form of a wind; only a small fraction of the gas is actually accreted onto the black hole. The V-shapes of both sources are pointed toward the position of SgrA* and therefore represent the most direct indication for a wind from SgrA*. Summary and Conclusion We have detected an X-ray flare that occurred synchronously with an NIR flare with polarized sub-flares. This confirms the previous finding (Eckart et al. 2004, Yusef-Zadeh et al. 2006) that there exists a class of X-ray flares that show simultaneous NIR emission with time lags of less than 10 minutes. In addition there are lower energy flare events that are bright in the infrared and are not detected in the X-ray domain (Hornstein et al. 2007). In the framework of a relativistic disk model, the May 2007 polarimetric NIR measurements of a flare event with the highest sub-flare-to-flare contrast observed until now may provide direct evidence for the evolution of a hot spot during the flare. This supports the interpretation of the NIR polarimetry data within a relativistic disk model. Combined with the assumption of spot expansion due to differential rotation, the SSC disk model can explain the combined X-ray and NIR data of the July 2004 flare, and possibly also of the flare from 17 July 2006 reported by Hornstein et al. (2007). The combination of relativistic amplification curves with a simple SSC mechanism allows us to make zero-order interpretations within a time dependent flare emission model. We find that the temporary accretion disk around Sgr A* can be well represented by a multi-component model with source components as sketched in Fig. 7. In this figure the disk is seen edge-on. Figure 7. A possible source structure for the accretion disk around the SMBH associated with Sgr A*. In this sketch the disk is shown as a vertical thick line to the right. Extending to the left, we show one side above the disk, with the base of the jet/wind from SgrA* indicated. Higher energy flare emission (lower part) is responsible for the observed NIR/X-ray flare emission. Lower energy flare emission (upper part) may be peaked in the THz domain and may substantially contribute to long wavelength infrared emission. Here we have assumed that the NIR/X-ray contributions are negligible. In addition to the expansion towards and beyond the mm-source size, radial and azimuthal expansion within the disk may occur. Here λ₂ is the wavelength corresponding to the upper synchrotron cutoff frequency ν₂. Details of expected jet geometries are discussed elsewhere. Simultaneous NIR K- and L-band measurements in combination with X-ray observations should lead to a set of light curves that allow us to test the proposed model and to discriminate between the individual higher and lower energy flare events. Simultaneous X-ray measurements are important to clearly distinguish between high and low energy events. To do so, it is required to separate the thermal, non-variable bremsstrahlung and the non-thermal, variable part of the Sgr A* X-ray flux density. This can only be achieved with a sufficiently high angular resolution in the X-ray regime.
This capability is provided by the ACIS-I instrument aboard the Chandra X-ray Observatory and is essential for the proposed observations, especially in the case of weak X-ray flare events in which the X-ray flare intensity is of the order of the extended bremsstrahlung component associated with SgrA*, or even below it. Such weak events can be clearly identified in combination with infrared data.
An EMT–Driven Alternative Splicing Program Occurs in Human Breast Cancer and Modulates Cellular Phenotype Epithelial-mesenchymal transition (EMT), a mechanism important for embryonic development, plays a critical role during malignant transformation. While much is known about transcriptional regulation of EMT, alternative splicing of several genes has also been correlated with EMT progression, but the extent of splicing changes and their contributions to the morphological conversion accompanying EMT have not been investigated comprehensively. Using an established cell culture model and RNA–Seq analyses, we determined an alternative splicing signature for EMT. Genes encoding key drivers of EMT–dependent changes in cell phenotype, such as actin cytoskeleton remodeling, regulation of cell–cell junction formation, and regulation of cell migration, were enriched among EMT–associated alternatively splicing events. Our analysis suggested that most EMT–associated alternative splicing events are regulated by one or more members of the RBFOX, MBNL, CELF, hnRNP, or ESRP classes of splicing factors. The EMT alternative splicing signature was confirmed in human breast cancer cell lines, which could be classified into basal and luminal subtypes based exclusively on their EMT–associated splicing pattern. Expression of EMT–associated alternative mRNA transcripts was also observed in primary breast cancer samples, indicating that EMT–dependent splicing changes occur commonly in human tumors. The functional significance of EMT–associated alternative splicing was tested by expression of the epithelial-specific splicing factor ESRP1 or by depletion of RBFOX2 in mesenchymal cells, both of which elicited significant changes in cell morphology and motility towards an epithelial phenotype, suggesting that splicing regulation alone can drive critical aspects of EMT–associated phenotypic changes. The molecular description obtained here may aid in the development of new diagnostic and prognostic markers for analysis of breast cancer progression. Introduction About 90% of human malignancies are carcinomas, tumors of epithelial origin [1]. The early steps in carcinoma metastasis often bear a striking resemblance to developmental programs involving Epithelial-to-Mesenchymal Transition (EMT), a process that converts organized epithelial cells into isolated, migratory cells with a mesenchymal morphology [2]. A growing body of work implicates EMT-like mechanisms in tumor cell invasion and dissemination in experimental systems and, recently, in human cancer [3,4]. Normal epithelia are comprised of cells with aligned apical-basal polarity that are interconnected laterally by several types of junctions, including adherens junctions (AJs), which play important roles in establishing and regulating cell-cell adhesion [5]. During EMT, apico-basolateral polarity is lost, cell-cell junctions dissolve and the actin cytoskeleton is remodeled to endow cells with mesenchymal characteristics, including an elongated, migratory and invasive phenotype. Importantly, as a consequence of EMT cells may escape tumors, invade the surrounding tissue and migrate towards blood-or lymphatic vessels guided by the cells and extracellular matrix present in their microenvironment [6]. While EMT is thought to promote carcinoma invasion and metastasis, it is clear that other mechanisms for carcinoma progression exist [3,7], and direct in vivo evidence linking EMT to metastasis in clinical subjects has been challenging to obtain. 
Some studies have shown that a poor clinical outcome correlates with markers of EMT progression [8][9][10][11]. Conversely, some reports have identified carcinoma cells in primary and metastatic lesions with well-differentiated epithelial morphology [7,12]. Detection of EMT in vivo during metastasis is complicated further by a reverse process, Mesenchymal-to-Epithelial transition (MET), that is also important during embryonic development and is thought to occur during metastatic colonization at secondary sites [13]. New approaches are needed to detect EMT and MET during metastatic progression and to clarify their clinical significance [14,15]. The molecular mechanisms underlying EMT have been studied extensively in the last decade. EMT-inducing growth factors can trigger signaling cascades that activate a network of transcription factors, including Snail, ZEB-1, Goosecoid, FOXC2, Twist and others [16], that orchestrate the EMT program. Ectopic expression of a number of the EMT-associated transcription factors can initiate the program as well. Twist, a potent EMT driver, was identified originally as an inducer of mesoderm formation in Drosophila [17]. Ectopic Twist expression in epithelial cells results in loss of E-cadherin-mediated cell-cell adhesion, acquisition of mesenchymal markers and increased motility of isolated cells [18], a hallmark of the mesenchymal phenotype. EMT is also likely regulated by post-transcriptional mechanisms including alternative pre-mRNA splicing. Alternative splicing expands the diversity of the proteome by producing multiple mRNA and protein isoforms per gene [19]. More than 90% of human genes are estimated to undergo alternative splicing, with a majority of alternative splicing events exhibiting tissue-specific splicing differences [20]. A variety of cancer-associated genes express alternatively spliced isoforms [21], indicating that regulation at the level of splicing may play important roles in cancer onset and progression. Alternative splicing of FGFR2 correlates with EMT in rat bladder carcinoma cells, where mutually exclusive inclusion of one of two exons defines the ligand binding specificity of the receptor during EMT [22]. ENAH (also known as Mena), an actin cytoskeleton regulatory protein, contains a small coding exon 11a that is included only in epithelial cells and excluded in mesenchymal cell lines and during EMT [23,24]. Alternative splicing of p120catenin (CTNND1) generates protein isoforms that display opposite effects on cell motility in epithelial and mesenchymal cells [25]. Recently, two epithelialspecific RNA binding proteins, ESRP1 and ESRP2, homologs of the nematode splicing factor Sym-2, were identified in a screen for regulators of FGFR2 splicing [24]. The RBFOX2 splicing factor has recently been demonstrated to regulate subtype-specific splicing in a panel of breast cancer cell lines [26]. The ESRPs and RBFOX2 promote epithelial splicing of a number of transcripts (including FGFR2 and ENAH), some of which play important roles in EMT [24,27]. Loss of ESRPs in epithelial cells induces some EMT-like changes in cell morphology [28]. However, the full extent of alternative splicing during EMT and its functional consequences to cell phenotype has yet to be elucidated. We used an established in vitro model of EMT to evaluate the amount of gene expression and alternative splicing changes during EMT. 
Using deep sequencing analysis of the transcriptomes of epithelial and mesenchymal cells, we discovered a global alternative splicing program that alters splicing of key regulators of cell phenotype, including proteins that control cell adhesion and cytoskeletal dynamics. Our analysis indicates that EMT-associated splicing is likely regulated by several splicing factors, including the ESRPs and members of the RBFOX, CELF, MBNL, and hnRNP classes of splicing factors. We found that partial induction of the epithelial splicing program in mesenchymal cells via ectopic expression of ESRP1 or by depletion of RBFOX2 conferred epithelial properties to mesenchymal cells, supporting a key role for alternative splicing during MET. Multiple EMT-associated alternative splicing events were identified in breast cancer cell lines and in primary human breast cancer samples, where epithelial and mesenchymal splicing patterns were negatively correlated. This EMT-associated splicing signature likely represents a broadly conserved program involved in the acquisition of mesenchymal-like phenotypes in vivo that could be used to detect EMT in primary human cancers with a potentially significant prognostic value. Large-scale changes in gene expression accompany EMT To assess gene and alternative mRNA isoform expression during EMT, we utilized an in vitro model in which mammary epithelial cells (HMLE) expressing Twist fused to a modified estrogen receptor (ER) undergo EMT when the fusion protein is activated by addition of the ER ligand 4-hydroxytamoxifen (4-OHT; tamoxifen) [29]. Untreated HMLE/Twist-ER epithelial cells maintained highly organized cell-cell adhesions and cell polarity (Figure 1A). Following tamoxifen treatment, the cobblestone-like appearance of HMLE/Twist-ER cells was replaced by a spindle-like, fibroblastic morphology, consistent with previously published results (Figure 1A; [29]). This morphological transformation represents one of the hallmarks of EMT. As expected, phenotypic changes coincided with a change in expression of canonical EMT markers, including loss of E-cadherin and induction of N-cadherin, Fibronectin and Vimentin expression (Figure 1B). Tamoxifen competes with estrogen for binding to ER to form a complex that translocates into the nucleus where it recruits co-repressors of transcription, thus preventing activation of ER downstream targets [30]. Since HMLE cells do not express endogenous ER (Figure S1B), EMT induction in HMLE/Twist-ER cells is likely initiated exclusively by downstream targets of Twist, making HMLE/Twist-ER cells a useful in vitro model of EMT. To obtain an in-depth analysis of gene expression and splicing changes during EMT, we collected mRNA from untreated (epithelial) and from tamoxifen-treated (mesenchymal) HMLE/Twist-ER cells. Deep sequencing of fragments of polyA-selected mRNAs (RNA-Seq) was used to obtain a digital inventory of gene and mRNA isoform expression (Figure 1A). Author Summary Epithelial-to-mesenchymal transition (EMT) is the process by which cancer cells lose their epithelial characteristics and obtain a mesenchymal phenotype that is thought to allow them to migrate away from the primary tumor. A better understanding of how EMT is controlled would be valuable in predicting the likelihood of metastasis and in designing targeted therapies to block metastatic progression. While there have been many studies on the contribution of changes in gene expression to EMT, much less is known regarding the role of alternative splicing of mRNA during EMT.
Alternative splicing can produce different protein isoforms from the same gene that often have distinct activities and functions. Here, we used a recently developed method to characterize changes in alternative splicing during EMT and found that thousands of multi-exon genes underwent alternative splicing. Alternative isoform expression was confirmed in human breast cancer cell lines and in primary human breast cancer samples, indicating that EMT-dependent splicing changes occur commonly in human tumors. Since EMT is considered an early step in metastatic progression, novel markers of EMT that we identified in human breast cancer samples might become valuable prognostic and diagnostic tools if confirmed in a larger cohort of patients. Between 27 million and 30 million 39-base-pair (bp) cDNA fragments were sequenced from each sample (Figure S1A). Sequenced cDNA fragments (reads) were mapped to the human genome (hg18 version) and to a splice junction database derived from AceView annotation [31]. In total, ∼75% of reads mapped uniquely to the genome or to splice junctions, allowing up to 2 mismatches. Less than 1% of total reads mapped uniquely to rRNA sequences (data not shown). Read density (coverage) was over 400-fold higher in exons than in introns or intergenic regions (Figure S1C), indicating that most reads derived from mature mRNA. We first estimated gene expression changes during EMT using 'Reads Per Kilobase of Exon Model per Million Mapped Reads' (RPKM), a measure of expression that reflects the molar concentration of a transcript in the sample by normalizing read counts for mRNA length and for the total read number in the sample [32]. Applying both a statistical cut-off based on Audic-Claverie statistics for read-based expression profiling [33] and requiring a minimum 3-fold change, we observed that ∼2,060 genes were downregulated, while ∼950 were upregulated in EMT (Figure S2A), indicating a large-scale reorganization of the transcriptome during this process, in agreement with recently published data [34]. As expected, E-cadherin was downregulated, while N-cadherin was upregulated during EMT [18]; actin transcript levels remained unchanged (Figure S2A). These observations revealed that Twist-induced EMT is accompanied by massive changes in gene expression similar to those observed in developmental EMT [35]. Gene ontology (GO) enrichment analysis of up- and downregulated genes was used to gain insight into the functional significance of the EMT-driven expression changes. Genes involved in epithelial cell differentiation, encoding components of cell cycle machinery and cell-cell junction components, were downregulated during EMT (Figure S2B). Concomitantly, genes associated with cell-matrix adhesion, extracellular matrix organization and cell motility were upregulated (Figure S2C). Thus, the most significant EMT-driven changes in gene expression are associated with gene categories involved in the phenotypic conversion that occurs during EMT, in agreement with previously published data [36]. Alternative isoform expression is grossly affected in EMT To explore the extent of regulated RNA processing during EMT, we examined eight common types of alternative isoform expression events, each capable of producing multiple mRNA isoforms from a gene through alternative splicing, alternative cleavage and polyadenylation (APA) and/or alternative promoter usage (Figure 1D).
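A minimal sketch of the RPKM normalization described above, i.e. read counts scaled by transcript length in kilobases and by sequencing depth in millions of mapped reads. The numbers in the usage example are illustrative and not taken from the paper.

```python
def rpkm(read_count: int, exon_model_length_bp: int, total_mapped_reads: int) -> float:
    """Reads Per Kilobase of exon model per Million mapped reads."""
    exon_kb = exon_model_length_bp / 1_000
    depth_millions = total_mapped_reads / 1_000_000
    return read_count / (exon_kb * depth_millions)

# Illustrative values: 5,400 reads on a 2.7 kb exon model in a 27-million-read sample.
print(f"{rpkm(5_400, 2_700, 27_000_000):.1f} RPKM")   # -> 74.1
```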
These eight types of events included: skipped exons (SE), retained introns (RI), mutually exclusive exons (MXEs), alternative 5′ and 3′ splice sites (A5SS and A3SS), alternative first exons (AFE), alternative last exons (ALE) and tandem 3′ untranslated regions (tandem 3′ UTRs). A comprehensive set of ∼136,000 events of these eight types was derived from the AceView gene annotations [31]. The fraction of mRNAs that contained an alternative exon, the 'percent spliced in' (PSI or Ψ) value, was estimated by the ratio of the density of inclusion reads to the sum of the densities of inclusion reads and exclusion reads, with a variant of this method used for tandem 3′ UTRs, as described previously [20]. Thus, Ψ values range from ∼0, indicating predominant exclusion of an alternative exon from mRNAs, to ∼1, indicating predominant inclusion of the exon. The extent of EMT-specific regulation of these events was assessed by comparison of the mesenchymal (post-EMT) to the epithelial (pre-EMT) RNA-Seq data (Figure 1D). In all, for ∼40% of genes with documented alternative isoforms, both isoforms were detected by RNA-Seq reads. Of the events where both isoforms were detected, about 1 in 10 skipped exons and 1 in 20 mutually exclusive exons exhibited a significant change in Ψ value >10%, with hundreds of alternative splicing events of other types also regulated at this level (Figure 1D). At the gene level, 4.5% of genes contained an event(s) with an absolute change in Ψ value greater than 10% during EMT, and 2% of genes contained an event(s) with a Ψ value change greater than 30% (Table S1). These data indicate that a substantial change in splicing accompanies EMT. To confirm the accuracy of RNA-Seq analysis of alternative splicing during EMT, a subset of SE and MXE events was chosen from the set with False Discovery Rate (FDR) below ∼0.05 and |ΔΨ| > 0.1 for semi-quantitative RT-PCR (sqRT-PCR) analysis using cDNA from cells before and after EMT induction. Alternative splicing events with |ΔΨ| > 0.1 between human tissues are enriched for evolutionarily conserved sequences surrounding the alternative exons as compared to constitutive exons, suggesting that use of this cutoff enriches for functional events [20]. The tested subset included 37 alternative exons that showed relatively large changes in splicing based on the analysis of the RNA-seq data, or whose host genes encoded functionally interesting molecules with respect to EMT (e.g., adhesion molecules). This subset also included a few events that showed relatively small changes in isoform expression in order to assess the robustness of our statistical test. In all cases, the change in splicing ΔΨ (= Ψ_M − Ψ_E) detected by RT-PCR was in the same direction as that determined by RNA-Seq (Figure S1D), and in 78% of cases, the change in Ψ observed by sqRT-PCR was 20% or higher. Altogether, a strong concordance (R² = 0.86; Figure S1D) was observed between splicing changes detected by RNA-Seq and measurements by sqRT-PCR. The high validation rate and quantitative concordance by an independent method (sqRT-PCR) support the reliability of the alternative splicing events identified by the RNA-seq analysis. Genes with altered splicing during EMT showed strong enrichment for involvement in biological processes related to the regulation of the actin cytoskeleton, cell-cell junctions, regulation of cell migration and wound healing.
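A minimal sketch of the Ψ and ΔΨ computation described above, using read densities (reads per informative position) for the inclusion and exclusion isoforms. The counts are toy values, and the paper's statistical treatment (FDR filtering) is not reproduced.

```python
def psi(inc_reads: int, inc_positions: int, exc_reads: int, exc_positions: int) -> float:
    """Percent spliced in: inclusion read density over the summed inclusion and
    exclusion read densities."""
    inc_density = inc_reads / inc_positions
    exc_density = exc_reads / exc_positions
    return inc_density / (inc_density + exc_density)

# Toy counts for one skipped exon before (epithelial) and after (mesenchymal) EMT:
psi_e = psi(inc_reads=180, inc_positions=90, exc_reads=20, exc_positions=30)   # ~0.75
psi_m = psi(inc_reads=40, inc_positions=90, exc_reads=50, exc_positions=30)    # ~0.21

delta_psi = psi_m - psi_e    # Delta-Psi = Psi_M - Psi_E
print(f"dPsi = {delta_psi:+.2f}")   # events with |dPsi| > 0.1 were considered EMT-regulated
```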
Figure 1. (A) HMLE/Twist-ER cells were induced to undergo EMT by addition of tamoxifen into the culture media. mRNA was collected before EMT induction (epithelial sample) and after EMT induction (mesenchymal sample). cDNA pools from both samples were deep sequenced (RNA-Seq) and analyzed (see Materials and Methods). (B) Western blot analysis of N-cadherin, E-cadherin, fibronectin and vimentin expression with antibodies as indicated in cell lysates that were obtained before (1, untreated) and after (2, tamoxifen-treated) induction of EMT in HMLE/Twist-ER cells. α-tubulin was used as a loading control. (C) Gene ontology enrichment analysis bar graph of changes in alternative splicing events with |ΔΨ| ≥ 10% between samples. Gene ontology 'biological process' (GO_BP_FAT) annotation is indicated in red on the y axis. KEGG Pathway (http://www.genome.jp/kegg/) annotation is indicated in blue on the y axis. Benjamini FDR (−log10) is indicated on the x axis. The vertical dotted line marks Benjamini FDR = 0.05. (D) Column 1 shows the different kinds of splicing events that have been analyzed. Columns 2-5 show the number of events of each type: (2) all known events based on AceView annotation; (3) events with both isoforms supported by RNA-Seq reads; (4) events detected at a False Discovery Rate (FDR) of 5% with ΔΨ ≥ 10% between samples; (5) events detected at an FDR of 5% with ΔΨ ≥ 30% between samples. doi:10.1371/journal.pgen.1002218.g001 Pathway analysis using KEGG and GO detected enrichment of EMT-associated alternative splicing events in the Wnt, Ras and Insulin pathways (Figure 1C). These enriched terms suggested that alternative splicing plays a role in pathways that direct morphological and motility-related changes associated with EMT. Interestingly, although the types of gene functions most affected at the splicing and expression levels were largely similar, the actual sets of genes undergoing splicing-level and expression-level changes did not overlap more than expected by chance (Figure S3). This observation suggests that the EMT splicing program functions in a manner that is parallel to the transcriptional program and that gene expression and alternative splicing may coordinately drive changes to specific aspects of cell morphology related to the cytoskeleton, cell adhesion and cell motility. Regulatory motifs and factors associated with the EMT splicing program A substantial shift in the levels or activity of the major splicing factors likely underlies the large-scale program of splicing changes that occur during EMT. To explore the nature of this shift, we first analyzed the incidence of oligonucleotide motifs occurring in regulated alternative transcripts. As most splicing factors bind short RNA oligomers a few bases long, we identified pentanucleotides (5mers) that were enriched in regions adjacent to the splice sites involved in the splicing of exons induced or repressed upon EMT (Figure 2A). This analysis identified a few dozen 5mers enriched in each region relative to control alternative introns, including motifs corresponding to the RBFOX, CELF, ESRP and MBNL families of tissue-specific factors, as well as motifs for several heterogeneous nuclear ribonucleoprotein (hnRNP) factors, including hnRNPs F and H, PTB/hnRNP I, and hnRNP L (Tables S2, S3). A subset of these motifs was specifically enriched adjacent to exons whose Ψ values increased following EMT relative to exons whose splicing did not change (Figure 2A). These included motifs associated with RBFOX and ESRP splicing factors and with hnRNPs F/H and L.
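A minimal sketch of the kind of 5-mer counting that underlies such a motif-enrichment analysis. The sequences below are toy RNA strings (the first set is salted with the RBFOX-associated (U)GCAUG motif), and the paper's comparison against control alternative introns with multiple-testing correction is not reproduced.

```python
from collections import Counter
from itertools import chain

def count_kmers(sequences, k=5):
    """Count all overlapping k-mers in a collection of sequences."""
    counts = Counter()
    for seq in sequences:
        counts.update(seq[i:i + k] for i in range(len(seq) - k + 1))
    return counts

def enrichment(fg_counts, bg_counts, pseudo=1.0):
    """Ratio of k-mer frequencies in foreground vs. background regions."""
    fg_total, bg_total = sum(fg_counts.values()), sum(bg_counts.values())
    kmers = set(chain(fg_counts, bg_counts))
    return {k: ((fg_counts[k] + pseudo) / fg_total) /
               ((bg_counts[k] + pseudo) / bg_total) for k in kmers}

# Toy intronic sequences flanking EMT-regulated (foreground) vs. control (background) exons:
foreground = ["UGCAUGUGCAUGGUAAC", "AAUGCAUGCCUU"]   # contain the RBFOX-like GCAUG motif
background = ["CCCAGGUAAGUCCUUAG", "GGGAGGAAGGAA"]

scores = enrichment(count_kmers(foreground), count_kmers(background))
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3])
```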
An overlapping subset of motifs was enriched adjacent to exons whose Ψ values decreased following EMT, again including motifs associated with the RBFOX and hnRNP F/H families and also motifs associated with PTB and MBNL family proteins (Figure 2A). Several 5-mers without clear RNA binding protein partners, which may represent binding sites of uncharacterized splicing regulators in EMT, were also identified. We also examined changes in the expression of RNA binding protein (RBP) genes. The most striking changes in RBP expression occurred for the related epithelial-specific splicing factors ESRP1 (RBM35A) and ESRP2 (RBM35B) [24]. During EMT, the expression of these factors decreased by ∼90-fold and ∼35-fold, respectively, from relatively high initial levels (Figure 2B). Motif enrichment for ESRP splicing factors was observed in the upstream sequence of cassette exons upregulated during EMT (Figure 2A), consistent with the recent observation that ESRP binding sites are present in greater numbers upstream of silenced exons than upstream of included exons [28]. As ESRPs are downregulated during EMT, these silenced exons are relieved from ESRP inhibition and thus appear upregulated. Splicing factor activity often switches between positive and negative regulation depending on the location of binding relative to the regulated exon. RBFOX family splicing factors tend to enhance splicing when bound downstream and to repress splicing when bound upstream of alternative exons [27]. The observed pattern of enrichment of RBFOX motifs downstream of exons whose inclusion increased during EMT and upstream of exons whose inclusion decreased (Figure 2A) is therefore consistent with an increase in the activity of RBFOX family factors during EMT. Expression of the RBFOX2 gene increased moderately but significantly, by about 15%, following EMT (Table S4), while at the same time splicing of an MXE encoding the RNA-binding domain of the RBFOX2 protein increased by about 20%. Thus, these changes together should increase the levels of splicing-active RBFOX2 mRNA by at least a third. Recently, it has been suggested that RBFOX2 activity plays a role in regulating a set of breast cancer subtype-specific alternative splicing events [26]. The expression levels of many other RBPs associated with motifs enriched near EMT-regulated exons changed during EMT (Figure 2B, Table S4), including downregulation of the splicing repressor PTBP1 (PTB/hnRNP I) by ∼2.5-fold, downregulation of the PTB-associated splicing co-repressor RAVER1 by ∼4-fold, and downregulation of the myotonic dystrophy-associated splicing factors MBNL2 and MBNL3 and of hnRNP F by ∼1.6- to 2.5-fold. These observations suggested that changes in the levels and activity of several different splicing factors may contribute to the splicing changes observed in EMT. To explore the potential contributions of splicing factors to EMT-regulated alternative splicing, we analyzed published crosslinking/immunoprecipitation-sequencing (CLIP-Seq) data from human cell lines. Dozens of EMT-regulated skipped exons were associated with RBFOX2 CLIP-Seq clusters, and hundreds were associated with PTB CLIP-Seq clusters (Figure 2C; [27,37]). In addition, a fraction of the observed EMT-regulated splicing events overlapped with a set of ESRP1-regulated exons recently identified by Carstens and coworkers using RNAi and a splicing-sensitive microarray analysis (Figure 3C; [28]).
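Treating the two RBFOX2 changes as multiplicative factors reproduces the "at least a third" estimate quoted above; how exactly the authors combined the two numbers is an assumption here.

```python
gene_level = 1.15        # ~15% more RBFOX2 mRNA after EMT
active_isoform = 1.20    # ~20% higher inclusion of the RNA-binding-domain MXE
print(f"combined increase: {gene_level * active_isoform - 1:.0%}")   # ~38%, i.e. over a third
```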
Together, the RNAi and CLIP-Seq data demonstrate the potential for regulation of a substantial portion -perhaps a majority of EMT-regulated exons -by these three factors. Thus, our data are consistent with a model in which several splicing factors collaborate in the regulation of splicing during EMT, adding a layer of post-transcriptional regulation to the EMT program. EMT-associated alternative transcripts correlate with the phenotype of breast cancer cell lines Alternatively spliced mRNA isoforms that exhibit EMTassociated changes in exon inclusion might serve as valuable prognostic markers for metastatic disease, since EMT is considered an early event in metastatic progression. As an initial step towards eventual analysis of primary human samples, we assessed alternative isoform expression in a panel of human breast cancer cell lines of luminal (generally poorly metastatic) and basal-like origin (generally aggressive and metastatic). Luminal cell lines, like MCF7 and T47D, express high levels of epithelial markers including E-cadherin, while basal-like cell lines express mesenchymal markers including N-cadherin, vimentin and fibronectin [14]. In addition, in our analysis we included two cell linesderivatives of MDA-MB-231 cell metastases to the brain and bone -that exhibited a more aggressive phenotype compared to the parental MDA-MB-231 cells [38]. We hypothesized that splicing events with high epithelial inclusion (i.e. high inclusion in the pre-EMT/epithelial sample) would be expressed in luminal breast cancer cell lines, and conversely that splicing events with high mesenchymal inclusion (defined analogously) would be expressed in basal-like cell lines. A quantitative RT-PCR (qRT-PCR) analysis of nine skipped exons demonstrating the largest DY in the validated set of 37 alternative splicing events, using cDNA from the panel of luminal and basal-like cell lines, indicated that four epithelial inclusion events, in the SLC37A2, KIF13A, FLNB, and MBNL1 genes, were included at high frequency in luminal cell lines, whereas inclusion of these events was low in basal-like cells compared to T47D epithelial cells ( Figure 3A). Conversely, five mesenchymal-enriched inclusion events in the PLEKHA1, MLPH, ARHGEF11, CLSTN1 and PLOD2 genes were enriched in basal-like cell lines with only low inclusion levels in luminal cells relative to BT549 mesenchymal cells ( Figure 3B), consistent with recently published results [28]. Thus, taken together, epithelial inclusion events were identified in corresponding mRNA transcripts in luminal cells and were detected at very low levels in basal-like cells, while mesenchymal inclusion events were detected at low levels in luminal cells but showed a high inclusion ratio in basal-like cells ( Figure 3C, 3D). Therefore, the qRT-PCR analysis of skipped exons using cDNA from a panel of luminal and basallike breast cancer cell lines detected EMT-associated splicing events, as predicted by the RNA-seq analysis of Twist-induced EMT. To explore the expression of EMT-associated alternative splicing events in breast cancer cell lines further and to determine whether EMT-associated alternative exons could classify breast cancer cell line subtypes, we compared the expression of skipped exons (SEs) from our EMT RNA-seq analysis to available exon array data from luminal and basal B breast cancer cell lines in the NCI-60 panel [39]. 
Unsupervised hierarchical clustering of exon array data relating to 307 EMT-associated SE events (|ΔΨ| > 0.1, FDR < 0.05; foreground set) detected by the array segregated basal B cell lines from luminal cell lines, with only two basal cell lines (MDA-MB-436 and SUM149) misclassified in the luminal cluster (Figure 4A). In contrast, clustering of the exon array data using the background set of 8839 events resulted in cell line subtype classification with nine misclassifications, indicating a lack of intrinsic bias in the whole set of analyzed events and that the SE events identified by our EMT RNA-Seq better classify the luminal and basal B cell lines. Furthermore, a randomized-clustering procedure demonstrated that the clustering classification using our set of SE events was statistically significant (p-value = 0.0014). Therefore, the EMT-associated splicing program identified by our RNA-seq analysis is conserved in breast cancer cell lines and correlates with their invasive and metastatic properties. We hypothesized that some of the heterogeneity in splicing observed across the cell lines stemmed from cell-type-specific splicing events that may not be linked directly to EMT regulation (Figure 4A). To find the "core" EMT alternative splicing signature that can unambiguously distinguish between breast cancer cell line subtypes, we compared EMT-driven SE events to the SE events that were differentially regulated between the luminal and basal B cell lines. Of the SE events represented on the array that changed significantly in our EMT RNA-Seq dataset (|ΔΨ| > 0.1, FDR < 0.05), a total of 24 events changed significantly between luminal and basal B cell lines at an FDR < 0.25. Of these, 19 (79%) changed in a "coherent" manner, in the sense that the change in exon inclusion was in the same direction between mesenchymal and epithelial samples in the EMT RNA-seq dataset as between basal B and luminal cell lines in the exon array dataset (Figure S4). Interestingly, coherence increased for events that changed more dramatically in the EMT RNA-Seq dataset, with all 11 (100%) of the SE events with RNA-seq |ΔΨ| > 0.3 exhibiting coherence between the two datasets (Figure S4). Notably, clustering analysis of luminal and basal B breast cancer cell lines using the 19 coherent SE events demonstrated that luminal cell lines could be unambiguously distinguished from basal B cell lines based on these splicing events alone (Figure 4B). These "core" EMT-associated alternative splicing events may comprise a common program that contributes to the phenotypic changes that endow cancer cells with invasive and metastatic capabilities. Alternative isoforms detected in the in vitro EMT model are expressed in primary human breast cancer samples To determine whether the alternative mRNA isoforms confirmed in human breast cancer cell lines are relevant to human disease, we assessed expression of these events in fine needle aspiration (FNA) biopsies from breast cancer patients. FNA is the least invasive available method of collecting diagnostic material from patients with a breast mass. This procedure is performed using a small gauge needle that gently disrupts the tissue and allows loose tumor cells to travel up the needle via capillary action. The FNA sample is usually enriched in tumor cells and can be analyzed by qRT-PCR [40]. However, due to the small volume of the sample, RNA recovery is low: tens of nanograms of total RNA at most.
As expected, a subset of patient FNA spreads contained tumor cells that appeared cohesive and tightly attached to each other, typical of benign ductal lesions, while another subset of FNA smears from invasive ductal carcinomas (IDCs) contained discohesive populations of enlarged tumor cells ( Figure 5A), typical for a highly invasive phenotype. Analysis of 15 random FNA smears from IDCs used in this study for the percentage of tumor, inflammatory and stromal cells demonstrated an almost a complete absence of adipocytes, macrophages and inflammatory cells ( Figure S5), indicating that all of the cells present in FNA samples were ductal cancer cells. Therefore, the phenotypic characteristics of FNA collected samples indicated that they represent an appropriate human sample for assessment of alternative mRNA transcript expression found in our in vitro screen for EMT-associated splicing. To check expression of alternative mRNA isoforms, we obtained FNA samples from 40 patients with IDCs of various grades and growth hormone receptor status. IDCs in patients were classified as well, moderately or poorly differentiated according to the modified Bloom Richardson scale. The clinical and demographic data including patients' age, tumor size, lymph node status, estrogen, progesterone and Her2/neu receptor status were also collected (Table S5). Using the cDNA from 40 IDC samples, we determined inclusion ratios for six SE events that exhibited the largest change in exon inclusion levels based on the analysis of breast cancer cell lines. These included epithelial inclusion events in ENAH, MBNL1, FLNB and SLC37A2, and mesenchymal inclusion events in MLPH and ARHGEF11 ( Figure 5B). The small amount of RNA isolated from FNA samples permitted analysis of only six alternative splicing events per sample. Inclusion ratios of splicing events in IDC samples were normalized to the average inclusion ratio of the same splicing event measured in six fibroadenoma (FA) samples. For each pair of splicing events, the Pearson correlation between normalized inclusion ratios of the two splicing events across 40 IDC samples was calculated and used for clustering analysis to assess the relationships between events ( Figure 5C). Interestingly, ENAH and SLC37A2 as well as MLPH and ARHGEF11 inclusion events were highly correlated. Some epithelial and mesenchymal inclusion events were inversely correlated, e.g., increases in FLNB inclusion tended to be associated with decreases in inclusion of the ARHGEF11 alternative exon. Little or no correlation was observed between SLC37A2 and MLPH inclusion events. Overall, many IDCs expressed the mesenchymal mRNA isoforms, indicating that EMT-associated splicing occurs in human tumors in vivo. Unsupervised clustering of splicing ratios of six alternative exons in 34 FNA samples demonstrated a significant correlation between the two mesenchymal markers, MLPH and ARHGEF11, and between the four epithelial markers, ENAH, SLC37A2, FLNB and MBNL1, while epithelial and mesenchymal marker groups exhibited anti-correlation ( Figure S6A). Approximately Unbiased (AU) p-values obtained from the Pvclust analysis (http://www.is. titech.ac.jp/,shimo/prog/pvclust/) were .99%, thus supporting reliability of the clustering tree ( Figure S6B). This result suggests that the IDC samples tended to have either epithelial or mesenchymal splicing patterns but rarely exhibited mixed inclusion patterns, indicating that IDCs could be unambiguously classified into two groups on this basis. 
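A minimal sketch of the event-level analysis described above: inclusion ratios are normalized to the fibroadenoma average, pairwise Pearson correlations are computed across tumors, and the events are clustered on correlation distance. The matrix is random toy data, and the bootstrap (AU p-value) support from Pvclust is not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(1)
events = ["ENAH", "SLC37A2", "FLNB", "MBNL1", "MLPH", "ARHGEF11"]

idc = rng.uniform(0.1, 0.9, size=(6, 40))       # toy inclusion ratios: events x 40 IDC samples
fa_mean = rng.uniform(0.3, 0.7, size=(6, 1))    # average inclusion in 6 fibroadenoma samples

normalized = idc / fa_mean                      # normalize each event to the FA average
corr = np.corrcoef(normalized)                  # 6 x 6 Pearson correlation between events

# Cluster events using 1 - r as the distance (condensed upper triangle for linkage).
tree = linkage(1.0 - corr[np.triu_indices(len(events), k=1)], method="average")
print(dendrogram(tree, labels=events, no_plot=True)["ivl"])   # leaf order of the event tree
```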
The ESRP1 splicing factor confers epithelial-like properties to mesenchymal cells By far the most strongly downregulated RBPs in EMT were the related factors ESRP1 and ESRP2 ( Figure 2B; Table S4). These factors have been proposed to promote an epithelial phenotype by facilitating epithelial-specific splicing of a number of genes, some of which have well documented and essential roles in EMT [24,41]. Silencing of ESRP1/2 in epithelial cells induced Ncadherin expression without affecting E-cadherin levels and led to a slight, but significant, increase in the rate of monolayer wound healing [28]. We hypothesized that expression of ESRP1 in mesenchymal cells would convert a portion of the mesenchymal splicing program to an epithelial state and allow us to examine the role of alternative splicing in the context of Mesenchymal-to-Epithelial Transition (MET). We introduced ESRP1-EGFP into HMLE/pBP-Twist cells, immortalized human mammary epithelial cells that ectopically express Twist [18], and analyzed expression of canonical EMT markers. As expected, control HMLE/pBP epithelial cells expressed high levels of E-cadherin while HMLE/pBP-Twist mesenchymal cells expressed high levels of N-cadherin ( Figure 6A; [18]). Expression of ESRP1 in HMLE/ pBP-Twist cells was sufficient to switch ENAH splicing to an epithelial pattern, as evident by the inclusion of epithelial-specific 11a exon of ENAH ( Figure 6A). However, ESRP1-expressing cells still had high levels of N-cadherin and low levels of E-cadherin. Thus, ESRP1 expression is sufficient to alter splicing of some targets but is not sufficient to alter expression of EMT markers in mesenchymal cells. One important consequence of EMT is altered cell migration. To assess the effect of ESRP1 expression on cell migration qualitatively, we analyzed cell the movement of cells migrating out of a matrigel drop by time-lapse microscopy. This assay is similar to a standard ex vivo EMT assay used in the studies of developmental EMT to assess cell migration of endocardial cushion explants [42]. Cells were reconstituted in a small volume of matrigel and allowed to migrate out of the cell-matrigel drop for 24 hrs ( Figure S7). Almost no difference in migration was observed in 8 hrs between control epithelial cells, mesenchymal cells and the same cells expressing ESRP1. However, by 19 hrs the epithelial HMLE/pBP cells continued to migrate as an epithelial sheet, keeping in tight contact with each other, while HMLE/pBP-Twist mesenchymal cells acquired a spindle-shaped morphology, migrated as individual cells and for a longer distance than epithelial cells during the same time period ( Figure S7). Interestingly, HMLE/pBP-Twist cells expressing ESRP1 became elongated but continued to move in contact with each other. These differences in migration were further manifested at 24 hrs, suggesting that ESRP1 expression conferred epitheliallike properties to the migration of mesenchymal HMLE/pBP-Twist cells ( Figure S7). To analyze the migration characteristics of mesenchymal cells upon ESRP1 expression quantitatively, we utilized an ''in monolayer'' migration assay [43] that evaluates the movement of individual cells within a monolayer in contrast to a ''sheet monolayer'' motility assay which assesses collective cell migration towards an open wound [44]. 
Control HMLE/pBP epithelial cells, HMLE/pBP-Twist mesenchymal cells and HMLE/pBP-Twist cells expressing ESRP1-EGFP were labeled with whole-cell tracking dye and plated along with the equivalent unlabeled cell types such that the labeled cells represented 5% of cells within a confluent monolayer, to assess migration in the presence of cell-cell contact (Figure S8A). As expected, epithelial cells exhibited significant movement in a 17 hr cell tracking experiment [43,45], while confluent mesenchymal cells moved little, if at all (Figure 6B; Videos S1, S2). Surprisingly, upon expression of ESRP1, mesenchymal cells demonstrated significant locomotion, bypassing their typical contact inhibition of motility and instead resembling the movement of epithelial cells in a monolayer (Figure 6B; Video S3). Windrose plots of cell movement, where all cell tracks are placed at the same starting point, clearly demonstrated the extent of motion for each cell type (Figure 6B). While many epithelial HMLE/pBP cells traversed paths of up to 300 µm in length, mesenchymal HMLE/pBP-Twist cells moved less than 100 µm. Interestingly, many ESRP1-expressing mesenchymal cells exhibited an intermediate range of motion of about 200 µm (Figure 6B). Analysis of the cell movement parameters revealed that the speed of ESRP1-expressing cells was significantly increased compared to the speed demonstrated by mesenchymal cells without ectopic ESRP1 expression (Figure 6C). The total path and overall displacement of HMLE/pBP-Twist/ESRP1 cells were also increased significantly (Figure S8B). The most likely explanation of these data is that splicing changes resulting from ESRP1 expression are sufficient to shift the migration properties of mesenchymal cells towards an epithelial-like phenotype. However, we cannot rule out indirect effects or possible uncharacterized functions of ESRP1. The actin organization and structure of cell-cell contacts have a substantial effect on the migration of cells within monolayers. To characterize the phenotypic changes underlying differences in cell migration behavior of epithelial HMLE/pBP cells, mesenchymal HMLE/pBP-Twist cells, and HMLE/pBP-Twist cells expressing ESRP1, immunofluorescence analysis was used to visualize actin organization and cell-cell junctions (Figure 7A; Figure S9). As expected, three-dimensional structured illumination microscopy revealed the presence of a circumferential actin belt in epithelial cells, while actin stress fibers prevailed in mesenchymal cells (Figure 7B). Interestingly, actin organization was altered in mesenchymal cells upon expression of ESRP1. While some stress fibers were present in the central part of the cell, prominent accumulation of peripheral circumferential actin, characteristic of epithelial cell morphology, was also observed. p120catenin, a marker for cell-cell adhesions, decorated areas of cell-cell contact in HMLE/pBP cells, while in HMLE/pBP-Twist cells p120catenin localization was barely visible at cell contact points and could be observed only in areas where adjacent cells overlapped without forming obvious junctions (Figure 7B). Expression of ESRP1 led to increased recruitment of p120catenin to the sites of cell-cell adhesion (Figure 7B). The tight junction marker ZO-1 as well as alpha-catenin localized to actin filaments that perpendicularly terminated at cell-cell borders in immature cell-cell junctions of epithelial cells.
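A minimal sketch of how per-cell speed, total path length and net displacement, the quantities compared in the tracking analysis above, can be derived from tracked positions. Coordinates and the sampling interval are toy values; the tracking software and statistics used in the paper are not reproduced.

```python
import math

def track_metrics(track, interval_min):
    """Motility metrics from a list of (x, y) positions in micrometres."""
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    total_path = sum(steps)                            # cumulative path length (um)
    net_displacement = math.dist(track[0], track[-1])  # start-to-end distance (um)
    mean_speed = total_path / (interval_min * len(steps))  # um per minute
    return total_path, net_displacement, mean_speed

# Toy track sampled every 15 minutes:
track = [(0, 0), (12, 5), (20, 18), (35, 22), (50, 40), (70, 55)]
print(track_metrics(track, interval_min=15))
```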
In contrast, ZO-1 and alpha-catenin localized to the sites of focal cell-cell contact at the ends of stress fibers in mesenchymal cells ( Figure 7A; Figure S9A). Interestingly, expression of ESRP1 in mesenchymal cells led to patterns of ZO-1 and alpha-catenin resembling their localization in epithelial cells. Thus, ESRP1 expression in mesenchymal cells partially reverted actin organization and cell-cell junction morphology towards the epithelial phenotype. A defining feature of epithelia and endothelia is to separate compositionally distinct fluid phase compartments by providing a barrier to ion and solute passage, a prerequisite for the development of most organ systems in vertebrates [46,47]. To assess whether the change in actin organization and cell-cell junction morphology in mesenchymal cells upon expression of ESRP1 would have functional consequences, we compared the ability of fluorescently tagged dextran to cross a confluent monolayer of epithelial HMLE/pBP cells, mesenchymal HMLE/pBP-Twist cells and the same cells expressing ESRP1. As expected, permeability of the HMLE/pBP-Twist cell monolayer was almost two-fold higher than permeability of the HMLE/ pBP cell monolayer ( Figure 7C). Strikingly, expression of ESRP1 in HMLE/pBP-Twist cells increased their barrier function significantly, resulting in permeability that was less then 1.5 fold higher than control epithelial cells ( Figure 7C). Thus, expression of ESRP1 lead to a substantial increase in barrier function of mesenchymal cells, caused by the epithelial-specific splicing induced in mesenchymal HMLE/pBP-Twist cells. These results suggest that ESRP1-mediated splicing changes may drive epithelial-like re-organization of peripheral actin and cell-cell junctions that underlie barrier function. Depletion of RBFOX2 in mesenchymal cells leads to a partial reversion towards epithelial phenotype As noted above, our analysis along with published data [26] suggest that the RBFOX2 splicing factor likely controls a substantial subset of EMT-dependent alternative splicing (Figure 2A, 2C). To assess the effect of RBFOX2 depletion on cell phenotype, we treated HMLE/pBP-Twist mesenchymal cells with scrambled shRNA or with shRNA targeting RBFOX2. qRT-PCR analysis demonstrated ,80% depletion of RBFOX2 mRNA ( Figure S10A), while RBFOX2 protein levels became virtually undetectable ( Figure S10C). RT-PCR analysis of the known RBFOX2 targets FAT and PLOD2 [26] confirmed changes consistent with depletion of RBFOX2 activity. In mesenchymal cells treated with RBFOX2 shRNA, FAT alternative exon inclusion was reduced from 40% to 5%. A less dramatic but significant effect on exon inclusion was also observed for the PLOD2 alternative exon ( Figure S10B). Expression of many EMT markers was unaffected by RBFOX2 depletion. No difference in expression was observed for Ncadherin and fibronectin compared to scrambled shRNA-treated control cells ( Figure S10C). However, vimentin levels were reduced, indicating a partial loss of the mesenchymal expression program in HMLE/pBP-Twist cells upon RBFOX2 knockdown. Immunofluorescence analysis revealed that RBFOX2 depletion in mesenchymal HMLE/pBP-Twist cells shifted their morphology from spindle-shaped to cobblestone-like, resembling epithelial cell morphology ( Figure S10D). Stress fibers, prominent in HMLE/ pBP-Twist cells, were not readily observed after RBFOX2 depletion. 
Junctional markers like ZO-1, p120catenin and alphacatenin brightly decorated cell-cell contacts, suggesting that cell junctions were formed in these cells in contrast to HMLE/pBP-Twist mesenchymal cells, where these markers were barely visible at sites of cell-cell contact ( Figure S10D). Qualitative assessment of cell migration properties using a matrigel drop assay described above demonstrated that HMLE/pBP-Twist cells expressing a scrambled shRNA exhibited an individual cell migration pattern and scattered in 24 hrs of plating characteristic of mesenchymal cells. In contrast, cells expressing RBFOX2 shRNA migrated as a sheet, staying in contact with each other ( Figure S10E). Together, these data suggests that, similar to ectopic ESRP1 expression, knockdown of RBFOX2 conferred a number of epithelial features to mesenchymal cells, presumably by shifting their splicing program from mesenchymal to partially epithelial. Discussion In the present study, we profiled the transcriptome of human mammary epithelial cells induced to undergo EMT by activation of Twist, a transcription factor important for EMT induction during embryonic development and metastasis. Using this system, we observed an EMT-associated global change in alternative splicing of a number of genes that are involved in functions crucial for EMT progression, such as cell adhesion, cell motility, and cytoskeletal remodeling. Several of the splicing changes discovered in vitro were also found to occur in a panel of breast cancer cell lines and in vivo in primary human breast cancer samples. We also demonstrated that expression of an epithelial specific splicing factor, ESRP1, was sufficient to cause a substantial shift in the actin organization, migration properties and barrier function of mesenchymal cells towards the epithelial phenotype, while depletion of the splicing factor RBFOX2 also conferred some epithelial properties to mesenchymal cells. Altogether, the present evidence leads us to propose that alternative splicing plays a major role in EMT and tumor progression by changing alternative isoform expression of genes important for epithelial and mesenchymal cell morphology and motility. Changes in alternative splicing contribute to pathological EMT Transcriptional regulation of EMT has been a focus of numerous studies in cancer cell lines and primary tumor samples in the last decade [15]. A number of transcription factors have been identified that repress key regulators of EMT such as Ecadherin, and induce transcription of the drivers of mesenchymal phenotype, including N-cadherin and vimentin [16,18,48,49]. Changes in alternative isoform expression during EMT have been observed previously only for a handful of genes including FGFR2, p120catenin, ENAH and CD44 [22][23][24][25]50]. Recently, epithelialspecific splicing factors ESRP1 and ESRP2 have been shown to regulate splicing of a subset of genes that contribute to the epithelial phenotype [28]. However, the extent to which coordinated changes in splicing might contribute to phenotypic and morphological changes during EMT has not been investigated systematically. Our results demonstrate that thousands of genes undergo changes in alternative isoform expression during EMT, establishing the existence of a program of alternative RNA processing accompanying EMT. 
Many of the alternative splicing events we observed may have a major effect on protein functions important for EMT, including regulation of cell migration, cell adhesion and actin cytoskeleton remodeling (see Figure 5B; Table 1). For example, inclusion of an alternative exon in the C-terminus of ARHGEF11 (Rho guanine nucleotide exchange factor 11, also known as PDZ-RhoGEF) is increased in mesenchymal cells. Interestingly, removal of the C-terminus of ARHGEF11 results in a remarkable increase in its ability to induce RhoA activation in vivo and promotes neoplastic transformation [51]. Furthermore, components of key pathways that control cell motility, invasion and EMT itself are affected by alternative splicing (Table 1), including components of the Wnt and TGF-β signaling pathways. Some RNA regulatory proteins were also affected. For example, increased inclusion of exon 5 of the splicing factor MBNL1 was detected in epithelial cells, a change that occurs in models of myotonic dystrophy and alters the intracellular localization of the protein from cytoplasmic to nuclear [52][53][54]. Interestingly, several previously uncharacterized mRNA isoforms of genes that control important aspects of EMT were found in this analysis. For example, a 40% increase in inclusion of a 26 aa region in SCRIB (a homolog of Drosophila scribble, involved in regulation of apical-basal polarity and directional migration of epithelial cells [55,56]) was observed in mesenchymal cells; this inclusion might alter a PKC phosphorylation site. This suggests that a cDNA containing this 26 aa exon may not encode the appropriate isoform to use when studying the function of SCRIB in epithelial cells. Altogether, our analysis demonstrates that alternative splicing in EMT leads to changes in protein functions in ways that contribute to the establishment of the mesenchymal phenotype, and identifies many widely studied molecules with the potential for significant isoform-dependent functions during EMT. Could key aspects of EMT and/or MET be driven by splicing changes alone, independent of the transcriptional machinery? Recent data suggest that systemic dissemination of tumor cells occurs at early stages of tumor development [57]; therefore, targeting MET therapeutically might prove more effective, since at the time of diagnosis it may already be too late to successfully target EMT-inducing events. Thus we chose to assess the contribution of changes in alternative splicing to MET. Our experiments with the ESRP1 and RBFOX2 splicing factors suggest that the epithelial splicing induced in mesenchymal cells by expression of ESRP1, or the loss of mesenchymal splicing resulting from depletion of RBFOX2, is not sufficient to convert gene expression into an epithelial pattern. However, mesenchymal cells expressing ectopic ESRP1 or depleted of RBFOX2 exhibited actin organization, barrier function and migration characteristics shifted significantly towards an epithelial phenotype, indicative of a partial MET. Our data, along with other reports [58,59], suggest that although transcriptional control is extremely important to drive EMT, alternative splicing is required to execute the complex changes needed for cells to undergo the dramatic phenotypic change from epithelial to mesenchymal states. Comparison of EMT-dependent skipped exon events identified in the current study to ESRP-regulated ones [28,41] revealed that out of ∼1,500 EMT-dependent events (FDR < 0.05), only 116 seem to be regulated by ESRP1/2 (Figure S11A).
Interestingly, ESRP1 expression in clinical samples correlated with the inclusion of an alternative exon of ENAH, but no correlation was observed with the presence of lymph node metastasis (Figure S11B). PTB and RBFOX2 may control a number of EMT-driven splicing events, as evident from a significant overlap between exons associated with CLIP-Seq tags and exons that undergo splicing changes during EMT. However, ESRP1, RBFOX2 and PTB together may regulate only a fraction of all EMT-associated alternative splicing (Figure 2A, 2C; Figure S11A), so it is likely that other splicing factors also play important roles in executing the EMT splicing program. Our RBP motif enrichment analysis suggests involvement of the MBNL family of splicing factors and several hnRNP proteins, including hnRNPs F/H, L and PTB. Potentially, alteration of a combination of ESRP1, RBFOX2 and/or other specific splicing factors could be sufficient to drive many phenotypic aspects of EMT. In other words, epithelial cells might potentially bypass the traditional EMT-inducing transcriptional networks to acquire mesenchymal-like phenotypes when triggered by global changes in splicing programs that enable an EMT-like transformation. This raises the intriguing possibility that instances where invasion and metastasis occur without changes in canonical EMT expression markers may arise from splicing-driven phenotypic changes. EMT in primary breast cancers Evidence for EMT in clinical carcinomas has been difficult to obtain, leading to a controversy regarding the role of EMT as a prerequisite for metastasis. Although EMT and MET have been observed in an animal model of prostate cancer [60], the presence of regions of well-differentiated epithelial morphology within some invasive primary tumors and metastatic lesions appears to conflict with a role for EMT in metastatic progression [7]. A number of factors that may account for this discrepancy have been suggested, including: 1) incomplete EMT may be sufficient for cells to metastasize; 2) EMT might only occur in a small number of cells within the tumor mass that would quickly disappear by intravasating into blood or lymphatic vessels; and 3) after colonization, tumor cells revert to an epithelial morphology at metastatic sites through a reciprocal process of mesenchymal to epithelial transition (MET) [13,15]. Thus, clinical samples of primary tumors and metastatic nodules often do not show evidence of EMT, because the relevant cells display a mesenchymal phenotype only when they are in transit from the primary tumor to the site of metastasis. Moreover, if indeed only a few cells in the primary tumor undergo EMT prior to migration, RNA from these cells would be diluted by RNA from the luminal parts of the tumor in qRT-PCR analyses. FNA samples appear to be an attractive alternative for assessing EMT. FNAs of some IDCs, where many cells are loosely attached to the tumor mass, collect motile cells that may already be 'in transit' from the primary tumor to secondary sites, some of which might presumably have undergone EMT. In our analysis of EMT-associated splicing changes in IDCs from breast cancer patients collected by FNA, we identified two groups of IDCs. In one group, inclusion of a set of epithelial splicing events was observed, while in the second group, inclusion of mesenchymal splicing events was detected, suggesting a post-EMT phenotype.
These data indicate that in some of the IDCs, tumor cells underwent EMT, consistent with the idea that EMT is associated with, and can contribute to, cancer progression. We hypothesize that IDCs in which mesenchymal splicing events were identified are more likely to metastasize than tumors exhibiting the epithelial splicing pattern, since recent studies suggest that expression of an EMT program is associated with poor clinical outcome in some tumor types [61,62]. EMT-associated alternative splicing events as potential prognostic and diagnostic markers for breast cancer metastasis Splicing aberrations have been associated with several diseases, including cancer, where altered splicing can lead to production of protein isoforms with oncogenic properties [63]. A large-scale analysis of alternative splicing of 600 cancer-associated genes in ductal breast tumors identified 41 breast cancer-specific markers that discriminate between normal breast tissue and ductal breast tumors [64]. A number of shared splicing events have recently been demonstrated in a panel of breast and ovarian cancers using a high-throughput RT-PCR approach [65]. Exon array analysis was recently used to identify subtype-specific alternative splicing events in a panel of breast cancer cell lines [26]. Therefore, it appears likely that alternative splicing analysis will dramatically increase the pool of potential biomarkers for cancer diagnostics. Since EMT is considered an early event in the metastatic process, splicing changes associated with EMT in particular have the potential to become useful prognostic and diagnostic markers for breast cancer metastasis. Analysis of the EMT-driven splicing events in the NCI-60 panel of breast cancer cell lines [39] demonstrated that many of the EMT-associated alternative isoforms are expressed. Furthermore, luminal and basal B cell lines could be distinguished based solely on their splicing patterns, suggesting that EMT-associated alternative splicing events may serve as useful markers for classification of breast cancer cell lines and potentially of human cancers. Moreover, we identified splicing events that might be considered novel markers of EMT in vivo. Alternative splicing of ENAH, MLPH, ARHGEF11, MBNL1, FLNB and SLC37A2 transcripts has been confirmed in a number of IDC FNAs, suggesting that our EMT-associated splicing signature may have prognostic or diagnostic potential. However, a mesenchymal splicing pattern did not correlate with the presence of lymph node metastasis. This finding is not surprising, since FNA samples were obtained from recently diagnosed cancer patients and no follow-up information is available regarding a possible relapse or the metastatic status of the tumor. Therefore, we cannot draw a meaningful conclusion about the overall relevance of the splicing pattern in relation to outcome or presence of lymph node metastasis. Although lymph node status is considered an independent prognostic factor for relative survival of breast cancer patients (National Cancer Institute website), the presence of regional lymph node metastases does not always correlate with subsequent distant spread, possibly because the mechanisms of hematogenous spread are different from those of lymphatic spread [66]. Within the clinical groups created using tumor size and lymph node involvement, there is a spectrum of disease behavior.
Even patients with stage I, lymph node-negative breast cancer have a 15-25% chance of developing distant metastasis, so breast cancers of early stage must be composed of mixed phenotypes that cannot be stratified using standard approaches such as lymph node status or tumor size [67]. Thus, an EMT splicing signature may help to stratify early-stage breast cancers; however, additional studies will be required to determine the prognostic potential of the EMT-associated splicing events that we have validated in FNA samples. A growing body of evidence suggests that EMT is responsible for acquisition of therapeutic resistance by cancer cells [13]. EMT has been implicated in the generation of cancer cells with stem-like characteristics that have a high tumor-initiating potential [68]. Cancer stem cells have been found enriched in residual breast tumors after chemo- or endocrine therapy [68], in colorectal cancer cells after oxaliplatin treatment [69], and in ovarian carcinoma cells after exposure to paclitaxel [70]. Clinical evidence suggests that expression of mesenchymal markers is increased in breast tumors after letrozole or docetaxel treatment [68]. This indicates that the EMT-associated alternative splicing events that we confirmed in FNA samples from patients with IDCs may potentially become predictive biomarkers that can be used for patient selection and/or provide information early during therapy. Further studies specifically designed to identify alternative splicing markers that reflect distinct breast cancer biology in relation to clinical outcomes and prognoses show promise for improving our understanding of EMT and breast cancer at the molecular level. Cell culture Immortalized human mammary epithelial cells (HMLEs) expressing either the empty pBabe puro vector (pBP), pBP-Twist or pWZL-Twist-ER were obtained from Robert Weinberg's laboratory at the Whitehead Institute for Biomedical Research (Cambridge, MA) and cultured as described previously [71]. 4-Hydroxytamoxifen (4-OHT) treatment was performed as described previously [29]. Other plasmids used in this study, the procedures used to produce virus, the procedure for infection of target cells, and the derivation of the different cell lines are provided in Text S1. Antibodies, Western blotting, and immunofluorescence Cells were lysed in the presence of 50 mM Tris, pH 8.0, 150 mM NaCl, 0.1% SDS, 0.5% Na-deoxycholate and 1.0% NP-40 on ice. Twenty micrograms of total protein from each sample were resolved on 8-10% SDS-PAGE gels with Laemmli running buffer and transferred to PVDF membranes. The blots were then probed with antibodies including anti-Mena, anti-Mena-11a, anti-E-cadherin (BD Transduction), anti-fibronectin (BD Transduction), anti-vimentin V9 (NeoMarkers), and anti-N-cadherin (BD Transduction). Detailed immunofluorescence methods are provided in Text S1. cDNA library preparation for Illumina sequencing Total RNA was extracted from untreated HMLE/Twist-ER cells (epithelial sample) and after prolonged 4-OHT treatment (mesenchymal sample) using the RNeasy Plus Mini kit (Qiagen). Poly-T capture beads were used to isolate mRNA from 10 µg of total RNA. mRNA was fragmented and used for first-strand cDNA synthesis by random hexamer-primed reverse transcription and subsequent second-strand cDNA synthesis. Sequencing adaptors were ligated using the Illumina Genomic DNA sample prep kit.
Fragments approximately 200 bp long were isolated by gel electrophoresis, amplified by 16 cycles of PCR, and sequenced on the Illumina Genome Analyzer, as described previously [20]. Computational analyses of RNA-Seq, exon array data, motif analysis, and clustering Computational and statistical methods are described in Text S1. Briefly, for analysis of RNA-Seq data, reads were mapped to the union of the genome and a database of junctional sequences derived from the AceView/Acembly annotation. Expression analysis was based on reads that mapped to constitutive exons among annotated RefGene transcripts of each gene. Splicing analysis was based on the read density supporting either isoform of an alternative splicing event from a database of alternative isoform events. For more details see Text S1. Alignments and raw sequencing reads were deposited in the Gene Expression Omnibus under accession number GSE30290. Reverse transcriptase PCR analysis Total RNA for validation of splicing events in HMLE/Twist-ER cells was extracted using the RNeasy Plus Mini kit (Qiagen) and reverse transcribed with Superscript II (Invitrogen). The resulting cDNA was used for 25 cycles of PCR with the primers listed in Text S1. Samples were then subjected to 10% TBE gel electrophoresis (Bio-Rad), stained with SYBR Safe DNA gel stain (Invitrogen), scanned (Typhoon, GE Healthcare) and quantified (ImageQuant 5.2). Total RNA from FNA samples was extracted using the RNeasy Plus Micro kit (Qiagen). The resulting cDNAs were used for qPCR analysis in triplicate using iQ SYBR Green Supermix (Bio-Rad). qPCR and data collection were performed on an iCycler (Bio-Rad). Primer sequences used to amplify cDNAs and a detailed description of the quantification analysis are provided in Text S1. Human tissue selection and FNA biopsy procedure Lumpectomy and mastectomy specimens that arrived at the grossing rooms of the AECOM hospitals Montefiore and Weiler for pathological examination were used for tissue collection. The specimens were sectioned as usual at 0.5 or 1.0 cm intervals to locate and visualize the lesion of interest. Four to five FNA biopsies (passes) were performed on grossly visible lesions using 25-gauge needles. When an FNA needle is inserted into a malignant tumor, it preferentially collects loose tumor cells, as can be noted on the FNA-obtained smears in Figure 5 and Figure S5. A small number of other cell types may also be present, most commonly inflammatory cells and macrophages. The aspirated material was collected in cryo-vials, and to assess the adequacy of the sample, a small portion of the aspirated material was taken out of the vial, smeared on a glass slide, air-dried and stained by the standard Diff-Quick protocol. The adequacy of the sample was determined by cytopathologic microscopic examination of the smears. Only samples composed of at least 95% benign or malignant epithelial cells were used in the study. Standard cytopathologic criteria such as cell size, nuclear/cytoplasmic ratio, nuclear contours, cell crowding and cohesiveness of the cells were the major criteria for classification into the benign or malignant category. Samples containing a mixture of malignant and benign cells, necrotic cell debris, or more than 5% inflammatory or stromal cells, as determined by cytopathologic microscopic examination, were discarded. FNA biopsy samples were immediately snap frozen in liquid nitrogen and stored frozen for RNA isolation followed by qPCR analysis.
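The splicing analysis described above quantifies each alternative event by the fraction of junction reads supporting the exon-inclusion isoform (Ψ) in each sample and compares samples via ΔΨ = Ψ(mes) − Ψ(epi), the quantity reported in the supplementary tables. The following minimal sketch in Python illustrates that calculation for a single skipped-exon event; it is not the authors' pipeline, and the read counts are hypothetical.

```python
# Minimal sketch of a "percent spliced in" (Psi) calculation for one skipped-exon event.
# Read counts below are hypothetical; a real pipeline would derive them from junction-mapped RNA-Seq reads.

def psi(inclusion_reads: int, exclusion_reads: int) -> float:
    """Fraction of reads supporting the exon-inclusion isoform (Psi)."""
    total = inclusion_reads + exclusion_reads
    if total == 0:
        raise ValueError("no informative reads for this event")
    return inclusion_reads / total

# Hypothetical junction read counts for one event in the two samples
epithelial = {"inclusion": 180, "exclusion": 20}    # pre-EMT sample
mesenchymal = {"inclusion": 60, "exclusion": 140}   # post-EMT sample

psi_epi = psi(epithelial["inclusion"], epithelial["exclusion"])
psi_mes = psi(mesenchymal["inclusion"], mesenchymal["exclusion"])
delta_psi = psi_mes - psi_epi   # delta-Psi = Psi(mes) - Psi(epi), as in Table S1

print(f"Psi(epi) = {psi_epi:.2f}, Psi(mes) = {psi_mes:.2f}, delta-Psi = {delta_psi:.2f}")
```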
Specimens were collected without patient identifiers following protocols approved by the Montefiore Medical Center Institutional Review Board. Cell migration assays The matrigel overlay assay was performed as previously described [42]. 10⁵ cells were mixed with 3.5 mg/ml matrigel and polymerized in a drop on top of a matrigel-covered coverslip. Images of migrating cells at the 0, 8, 19 and 24 hr time points were obtained on a Nikon Eclipse TE200 using a 10× DIC objective. The cell migration assay was performed as previously described [43,72]. Cells were incubated with CMFDA (Invitrogen) for 10 minutes and seeded overnight. Labeled and unlabeled cells were seeded at a 1:20 ratio. After 24 hrs, cells were placed on an environment-controlled Nikon TE2000 microscope (Nikon Instruments; Melville, NY) and imaged every 10 minutes for 12 hrs. Image sequences were analyzed with Bitplane Imaris software (Zurich, Switzerland) using the built-in 'Spots' function. 12-hour tracks were generated using the 'Brownian Motion' algorithm. Permeability assay HMLE/pBP-EGFP, HMLE/pBP-Twist-EGFP and HMLE/pBP-Twist/ESRP1-EGFP cells were seeded at confluence on polycarbonate transwell membrane inserts (3.0 µm pore size; Falcon 353492) and cultured for 3 d. 70 kDa Texas Red-dextran (Invitrogen) was added to the top chamber at 2 mg/ml, and its movement into the bottom chamber was monitored over 4 hrs with a spectrophotometer. Figure S4 Coherence between the NCI-60 array data and the EMT RNA-Seq dataset increases for highly changed EMT-associated SE events. A bar graph demonstrating the fraction of coherent events between the EMT RNA-Seq data and a panel of NCI-60 breast cancer cell lines [41] as a function of RNA-Seq |ΔΨ| cut-offs. The number of events called significant at the corresponding RNA-Seq |ΔΨ| cut-off and exon array FDR < 0.25 [41] is depicted above each column. Table S1 Skipped and mutually exclusive alternative splicing events with FDR < 0.05 and |ΔΨ| ≥ 0.03. Column 1 marks the type of event: SE - skipped exon, MXE - mutually exclusive exon. Column 2 - gene symbol. Column 3 - Ensembl gene ID. Column 4 - the chromosome on which the gene is located. Column 5 - the DNA strand on which the gene is encoded. Column 6 - exon coordinates of the flanking and alternative exons: for SE events, <upstream flanking exon>, <alternative exon>, <downstream flanking exon> / <upstream flanking exon>, <downstream flanking exon>; for MXE events, <upstream flanking exon>, <alternative exon 1>, <downstream flanking exon> / <upstream flanking exon>, <alternative exon 2>, <downstream flanking exon>. Column 7 - the Ψ of the alternative event in the epithelial (pre-EMT) sample. Column 8 - the Ψ of the alternative event in the mesenchymal (post-EMT) sample. Column 9 - ΔΨ = Ψ(mes) − Ψ(epi). Column 10 - FDR. Column 1 (Exon) indicates the reference exon of the intronic element analyzed. Column 2 (Element) indicates the intronic element analyzed: I5 - 5′ sequence of the intron, I3 - 3′ sequence of the intron. Column 3 (p-value) - the hypergeometric p-value of the 5mer frequency in the foreground over that of the background. Column 4 (FDR) - the Benjamini-Hochberg multiple-comparison FDR of the p-value. Column 5 (background rate) - the density of the 5mer in the background (the set of unchanged SE events). Column 6 (expected frequency) - the expected count of the 5mer in the foreground given the background rate. Column 7 (foreground rate) - the density of the 5mer in the foreground (the set of changed SE events). Column 8 (foreground frequency) - the count of the 5mer in the foreground.
Column 9 (word) gives the sequence of the 5mer. Column 1 (Exon) indicates the reference exon of the intronic element analyzed. Column 2 (Element) indicates the intronic element analyzed: I5 - 5′ sequence of the intron, I3 - 3′ sequence of the intron. Column 3 (p-value) - the hypergeometric p-value of the 5mer frequency in the foreground over that of the background. Column 4 (FDR) - the Benjamini-Hochberg multiple-comparison FDR of the p-value. Column 5 (background rate) - the density of the 5mer in the background (the set of unchanged SE events). Column 6 (expected frequency) - the expected count of the 5mer in the foreground given the background rate. Column 7 (foreground rate) - the density of the 5mer in the foreground (the set of changed SE events). Column 8 (foreground frequency) - the count of the 5mer in the foreground. Column 9 (word) gives the sequence of the 5mer. (XLS) Video S1 HMLE/pBP cells migrate efficiently in a monolayer. HMLE/pBP cells were labeled with the cellular dye CMFDA and seeded in a confluent monolayer mixed 1:20 with unlabelled cells. Cells were imaged for 12 hours at 10 min intervals. Cell tracks were generated using semi-automated cell tracking and represent single-cell tracks over 12 hours. Centroids of fluorescent cells are marked by grey circles. (MOV) Video S2 Monolayer migration of HMLE/pBP-Twist cells is cell-contact inhibited. HMLE/pBP-Twist cells were labeled with the cellular dye CMFDA and seeded in a confluent monolayer mixed 1:20 with unlabelled cells. Cells were imaged for 12 hours at 10 min intervals. Cell tracks were generated using semi-automated cell tracking and represent single-cell tracks over 12 hours. Centroids of fluorescent cells are marked by grey circles. (MOV) Video S3 HMLE/pBP-Twist/ESRP1-EGFP cells demonstrate significant locomotion. HMLE/pBP-Twist/ESRP1-EGFP cells were labeled with the cellular dye CMFDA and seeded in a confluent monolayer mixed 1:20 with unlabelled cells. Cells were imaged for 12 hours at 10 min intervals. Cell tracks were generated using semi-automated cell tracking and represent single-cell tracks over 12 hours. Centroids of fluorescent cells are marked by grey circles. (MOV)
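The motif tables described above rank each 5mer by a hypergeometric p-value comparing its frequency in the foreground (introns flanking EMT-changed skipped exons) with that in the background (introns flanking unchanged exons). A minimal sketch of one common parameterization of such an enrichment test is shown below; the counts are hypothetical and the paper's exact implementation may differ.

```python
# Minimal sketch of a hypergeometric enrichment test for one 5mer,
# comparing foreground (changed SE events) vs. background (unchanged SE events).
# All counts are hypothetical; the paper's exact parameterization may differ.
from scipy.stats import hypergeom

fg_total = 50_000          # total 5mer positions counted in foreground introns
bg_total = 500_000         # total 5mer positions counted in background introns
fg_count = 180             # occurrences of this 5mer (e.g., "GCAUG") in the foreground
bg_count = 900             # occurrences of this 5mer in the background

population = fg_total + bg_total          # all counted positions
successes = fg_count + bg_count           # all occurrences of this 5mer
draws = fg_total                          # positions drawn from the foreground

# P(observing >= fg_count occurrences in the foreground by chance)
p_value = hypergeom.sf(fg_count - 1, population, successes, draws)

expected = draws * successes / population  # expected foreground count given the background rate
print(f"observed={fg_count}, expected={expected:.1f}, p={p_value:.2e}")
```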
Challenges and Strategies in Developing an Enzymatic Wearable Sweat Glucose Biosensor as a Practical Point-Of-Care Monitoring Tool for Type II Diabetes Recently, several studies have been conducted on wearable biosensors. Despite being skin-adhesive and mountable diagnostic devices, flexible biosensor patches cannot truly be considered wearable biosensors if they need to be connected to external instruments/processors to provide meaningful data/readings. A realistic and usable wearable biosensor should be self-contained, with a fully integrated device framework carefully designed and configured to provide reliable and intelligent diagnostics. There are several major challenges to achieving continuous sweat monitoring in real time for the systematic and effective management of type II diabetes (e.g., prevention, screening, monitoring, and treatment) through wearable sweat glucose biosensors. Consequently, further in-depth research regarding the exact interrelationship between active or passive sweat glucose and blood glucose is required to assess the applicability of wearable glucose biosensors in functional health monitoring. This review provides some useful insights that can enable effective critical studies of these unresolved issues. In this review, we first classify wearable glucose biosensors based on their signal transduction, their respective challenges, and the advanced strategies required to overcome them. Subsequently, the challenges and limitations of enzymatic and non-enzymatic wearable glucose biosensors are discussed and compared. Ten basic criteria to be considered and fulfilled in the development of a suitable, workable, and wearable sweat-based glucose biosensor are listed, based on scientific reports from the last five years. We conclude with our outlook for the controllable, well-defined, and non-invasive monitoring of epidermal glucose for maximum diagnostic potential in the effective management of type II diabetes. Introduction According to the 10 November 2021 report of the WHO, diabetes mellitus was the cause of 1.5 million deaths in 2019. Nearly half of all deaths attributable to high blood glucose occur before the age of 70. The WHO estimates that diabetes was the ninth leading cause of death in 2019 (WHO, 2021) [1] ("WHO. Diabetes". http://www.who.int/mediacentre/factsheets/fs312/en/, accessed on 10 November 2021). Diabetes can be avoided or treated with effective and consistent physical exercise, a low carbohydrate/sugar diet, proper medication, and, most importantly, continuous monitoring and timely care to avoid complications. Despite this, mortality rates related to diabetes are rapidly increasing. Clinical blood glucose or HbA1c monitoring is not only costly but also causes pain and stress, in addition to the risk of potential infection. Non-invasive wearable glucose biosensors for sweat, saliva, and tears have been developed to overcome these obstacles. Owing to their simplicity, sweat-based biosensors have received considerable attention. Over the years, the physiological basis of sweat analysis has become well established. Olarte et al. (2013) used a digital nose to detect glucose in human sweat [2]. However, off-body sweat glucose biosensing is tedious, as sweat samples must be collected, stored, and tested remotely in a laboratory by trained professionals using costly and bulky equipment.
In addition, difficulties in obtaining adequate sweat sample volumes (≥10 µL), evaporation, and chemical degradation between sweat collection and testing have limited the sensitivity and reliability of these off-body tests. Consequently, the real-time, accurate, and routine monitoring of people's diabetic health status has been compromised. Thus, wearable sweat glucose biosensors that enable the collection and analysis of sweat in real time, and an autonomous, integrated wearable system enabling reliable, continuous, and painless monitoring of glucose with minimal intervention by specialists, have become a requirement. Wearable sweat-based glucose-biosensing devices have been explored for only about five years [3,4] (Figure 1a,b). For realistic point-of-care monitoring, it is necessary to fully understand sweat dynamics and explore wearable glucose biosensors. These should be stable and reliable for a long time (≥24 h) and be capable of assessing sweat glucose in real time, thereby accurately indicating the actual physiological condition of the human body. In addition, for continuous-glucose-monitoring purposes, certain easily accessible sweat-collection sites containing eccrine glands (sweat glands known to secrete sweat containing glucose), such as the forehead, forearm, and back, should be selected [5]. Glucose is one of the major sweat-secreted metabolites and is present at a much lower concentration in sweat than in the blood. Specifically, glucose concentrations range from 2 to 40 × 10⁻³ M in the blood, whereas in sweat they are only 0.01-1.11 × 10⁻³ M [6]. The low concentration of glucose in sweat (~100-fold dilution) requires highly sensitive systems, particularly in the case of hypoglycemia or contamination by glucose residue on the skin, which can cause strikingly large biases in sweat glucose readings. A majority of the recently reported wearable sweat glucose biosensors are based on either colorimetric or electrochemical detection. As a point-of-care diagnostic tool for better management of type II diabetes, colorimetric glucose biosensors generally have lower operating costs than electrochemical glucose biosensors (a fully integrated device is expensive). Accurate, real-time, and reliable concentration readings of sweat glucose (after automatic corrections for sweat rate, pH, and temperature) can only be obtained using electrochemical biosensors. Both methods are suitable for long-term continuous glucose monitoring, but the disposable biosensing patch needs to be replaced after each measurement, especially for colorimetric biosensors. The cost of the materials and the bio/chemical reagents used to develop a wearable glucose biosensor is, therefore, crucial. To be a viable diagnostic tool, the system/patch should not be priced higher than any of the currently commercially available personal blood glucose meters; otherwise, despite being non-invasive, the technology cannot be considered truly beneficial for the effective management of type II diabetes. In this review, we also address the limitations of each strategy and the advanced approaches identified to solve the respective problems. Wearable Sweat Biosensors Based on Colorimetric Detection A power-free operating system and simple RGB color signal interpretation have established colorimetric biosensing as a prominent method for sweat glucose analysis.
However, colorimetric biosensing works only for enzymatic glucose biosensors, as non-enzymatic glucose biosensors require electron transfer (glucose oxidation) and, therefore, only work with electrochemical sensing. Digital-imaging techniques are used in colorimetric chemical tests to extract quantitative information. Despite impressive progress, some inadequacies remain, particularly regarding accurate, simultaneous, multi-analyte analyses across physiologically relevant concentration ranges. These deficiencies are largely due to design shortcomings. Continuous sweat streams through the enzyme reaction zones can cause chemical diffusion and, therefore, color leaching. Time-dependent and spatially non-uniform color responses limit the precision and reliability of the measurement [7]. Errors associated with irregular color production on filter paper are also caused by uncontrollable capillary wicking through the paper and concentrated edge effects [8]. Isolated color reference markers may be mounted in the peripheral regions of the wearable sensing system; however, they cannot ensure accurate color analysis, owing to inconsistent lighting conditions [9]. Furthermore, in the case of colorimetric sweat glucose analysis, it is necessary to avoid any overlap between the sweat-sampling and glucose-biosensing processes, as some colorimetric reagents and reaction products (e.g., o-dianisidine) could cause severe skin burns. Several successful practical demonstrations of microfluidic platforms for accurate and multiplexed colorimetric analysis of glucose, lactate, pH, chloride, and sweat rate under a wide variety of ambient lighting conditions have been reported [4] (Figure 1b). In brief, white and black reference color markers were implemented to reduce dependence on realistic lighting conditions (daylight, shadow, and various light sources). A white dot was mounted in the middle of the unit, and four black crosses were symmetrically distributed near the center to calculate values for 100% and 0% of the RGB coordinates, respectively. Precise analyses of sweat rate and sweat volume in the microfluidic serpentine channel are performed by referring to the black crosses, even when the images are rotated or translated. The digital color data (as RGB percentages) were then converted to real analyte concentrations after image correction using a calibration chart. A thin, skin-mountable microfluidic interface platform with multimodal sweat glucose-biosensing capabilities was developed to overcome the problems caused by filter paper. A collection of optimized capillary-bursting valves in the device was used to direct the sweat flow to individual micro-reservoirs for separate, multiple test reactions within a single device, preventing cross-contamination or flow-through mixing effects. The color assay was carried out in the liquid phase to avoid uneven color development and allow spatially uniform adjustments for accurate, homogeneous color measurement [4] (Figure 1b). Multiple color reference markers are printed directly on the surface of the biosensing platforms, adjacent to each of the micro-reservoirs, to enable real-time quantitative analysis under various lighting conditions [10] (Figure 1c). Chemical assays must be designed to accurately measure the concentration of sweat biomarkers across physiologically relevant ranges.
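To make the white/black reference correction and calibration-chart conversion described above more concrete, the sketch below normalizes a measured RGB value against the on-patch white and black markers and interpolates a glucose concentration from a stored calibration curve. It is only an illustration of the principle under stated assumptions; the RGB readings, the choice of the red channel, and the calibration points are hypothetical rather than taken from the reported devices.

```python
# Minimal sketch: normalize a reservoir's color against on-patch white/black reference
# markers, then convert the corrected signal to a concentration via a calibration curve.
# All numeric values (RGB readings, calibration points) are hypothetical.
import numpy as np

def normalize_rgb(sample_rgb, white_rgb, black_rgb):
    """Rescale each channel so the black marker maps to 0% and the white marker to 100%."""
    sample, white, black = (np.asarray(x, dtype=float) for x in (sample_rgb, white_rgb, black_rgb))
    return np.clip((sample - black) / (white - black), 0.0, 1.0)

# Hypothetical calibration chart for the glucose reservoir: normalized red-channel value vs. mM glucose
cal_signal = np.array([0.95, 0.80, 0.62, 0.45, 0.30])    # color response (normalized)
cal_glucose_mM = np.array([0.0, 0.1, 0.3, 0.6, 1.0])      # known glucose standards

def glucose_from_color(sample_rgb, white_rgb, black_rgb):
    norm = normalize_rgb(sample_rgb, white_rgb, black_rgb)
    red = norm[0]  # assume the chromogenic reaction is read on the red channel
    # np.interp requires increasing x, so sort the calibration pairs by signal
    order = np.argsort(cal_signal)
    return float(np.interp(red, cal_signal[order], cal_glucose_mM[order]))

# Example image readings under ambient light (hypothetical)
print(glucose_from_color(sample_rgb=(142, 90, 80), white_rgb=(230, 228, 225), black_rgb=(25, 24, 26)))
```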
The color checker can be used to correct images taken under ambient lighting to achieve accurate results similar to those obtained under controlled lighting conditions. Wearable Sweat Biosensors Based on Electrochemical Detection Recent developments in skin-integrated electronic platforms combine electrochemical sweat glucose analysis with wireless data-transmission hardware. While these types of wearable biosensors allow continuous monitoring of sweat glucose levels, they usually require batteries in addition to other supporting electronics (wireless PCBs) and subsystems that can dominate the form factor [11]. Electrochemical sweat glucose biosensors can be manufactured in a two-electrode configuration on flexible elastomer substrates, a common strategy for low current consumption in electrochemical biosensors. In some situations, the airtight cover layer used, which is typically waterproof to avoid sweat evaporation, results in an uncomfortable wearing experience [12], especially when the patch is intended for long-term skin adhesion to enable continuous sweat glucose monitoring. To overcome this issue and increase functional applicability, the patch should be either disposable or reusable, that is, attachable multiple times [5] (Figure 1d). Wearable Biosensors for Non-Invasive Control of Sweat Glucose Two major types of wearable glucose biosensors, enzymatic and non-enzymatic sensors, have been identified to date. Enzymatic biosensors allow for both colorimetric and electrochemical biosensing. Non-enzymatic sensors, on the other hand, are only suitable for electrochemical sensing because glucose detection requires redox reactions. Owing to the selectivity and stability of commercial biosensor strips, all commercially available blood glucose meters are enzymatic glucose biosensors [13]. Non-enzymatic systems, on the other hand, suffer from poor electrode selectivity, slow glucose oxidation kinetics on many unmodified electrodes, electrode fouling by constituents of biological fluid samples, and operation in only a limited range of physiologically relevant pH conditions [14]. Therefore, non-enzymatic glucose sensors would not meet the strict requirements for commercial viability, currently ruling out a wearable non-enzymatic glucose sensor. In other words, wearable enzymatic sweat glucose biosensors have exciting potential as a point-of-care diagnostic tool for better management of type II diabetes. Enzymatic Biosensors The use of enzymes, however, results in several critical problems for wearable glucose biosensors. First, both wearable electrochemical and optical glucose biosensors based on the enzyme glucose oxidase (GOx) are highly sensitive to changes in the sweat environment, such as temperature, pH, and ionic strength, which cannot be regulated under in situ conditions [15]. The pH of sweat during prolonged sweating is typically lower (pH 4.5-6) than neutral physiological pH (pH 7), owing to metabolic lactic acid production during muscle movement; furthermore, it varies between individuals. Once denatured by acidic pH, high temperature, or high ionic strength, enzymes cannot recover their characteristics. Second, another concern is poor stability, as enzymes degrade over time. Even in the case of highly glycated and therefore stable GOx, its catalytic activity will decrease slowly with time, affecting its shelf life and long-term wearable monitoring capability (skin temperature increases during exercise; normal body temperature: 36.5-37.5 °C) [5].
Third, on exposure to frequent mechanical friction and skin deformation, delamination of the immobilized enzyme layer from the biosensor interface may occur. These limitations decrease the long-term storage stability and the reliable continuous usability of the sensors in real-time monitoring. Fourth, the immobilization of enzymes on the transducer surface involves processes such as covalent attachment, cross-linking polymerization, or sol-gel entrapment on the surface of the working electrode. This not only suppresses the activity of the enzyme but also immobilizes electrode reagents, thereby slowing down the transfer of electrons and reducing the sensitivity of detection. Fifth, commercially available enzymes are costly and are generally bio-sourced agents suitable for in vitro analyses only, and therefore may not be appropriate for wearable biosensing. Sixth, wearable sweat-based biosensors with a closed air gap to prevent sweat evaporation have only a two-phase solid-liquid interface, whereby oxygen is transported through the liquid phase (excreted sweat) with a low diffusion coefficient compared with oxygen in air. Consequently, the supply of oxygen available in the enzyme reaction zone is limited by Fick's law; this restricts the upper limit of linearity, the sensitivity, and the detection accuracy. To avoid substantial temperature changes, continuous monitoring of sweat glucose is ideally carried out indoors at a constant room temperature. Body temperature remains stable at about 37 °C at rest when iontophoresis is used to generate sweat, which is suitable for optimal enzyme activity during glucose detection. Integration of a flexible microfluidic system with a wearable glucose biosensor allows old acidic sweat to flow past the biosensing electrode and be immediately replenished with fresh sweat at a near-neutral pH, thereby overcoming the problem of acidic sweat pH due to the excretion of lactic acid. Owing to the short contact time between acidic sweat and the immobilized GOx enzyme, the enzyme is not degraded, as it would be if constantly immersed in acidic sweat in the absence of a microfluidic system. Long-term enzyme stability in contact with human skin can be achieved by designing a disposable type of sweat glucose biosensor in which the low-cost chemical sensing strip, but not the whole integrated system, is replaced for each measurement. It is necessary to design a flexible glucose-biosensing electrode to overcome the problem of enzyme delamination, but the electrode itself must be truly stretchable. In addition, a thin, soft, and flexible outer cover layer to avoid sweat evaporation will help protect the enzyme and prevent delamination from the biosensing interface. Incorporating advanced nanomaterials (e.g., MOFs, polymer brushes) into the biosensing electrode interface and biological/chemical modification of enzymes [16] could significantly improve the rate of electron transfer and the sensitivity of glucose sensing, thus mitigating enzyme immobilization problems. The multilayer biosensor model should hold the immobilized enzyme at a minimum distance from the skin to avoid near-direct contact with the wet skin surface and to minimize the potential for skin irritation or any adverse effects due to the use of bio-sourced enzyme grades in a wearable glucose biosensor [17] (Figure 2a).
To address the problem of low oxygen supply, Lei and co-workers developed a new wearable sweat-based biosensor with an open air hole and a three-phase reaction interface, thus providing an ample and constant supply of oxygen [18] (Figure 2b). To realize this concept, the outer, isolating cover layer is made of soft silicone rubber with deliberately introduced air holes. This allows oxygen to diffuse easily from the air to the active biosensing interface. As the oxygen diffusion coefficient in the air phase (2.0 × 10⁻¹ cm² s⁻¹) is much higher than that in the sweat solution phase (2.1 × 10⁻⁵ cm² s⁻¹) [19], the oxygen level in the tri-phase reaction zone remains relatively constant and can be higher by several orders of magnitude. An oxygen-rich enzyme electrode ensures rapid oxidation of glucose and the formation of an abundant amount of H₂O₂, which in turn strengthens the signal, allowing more accurate and consistent testing, a wider linear detection range, and ultra-high glucose sensitivity. Non-Enzymatic Sensors The development of a wearable sensor for non-enzymatic electrochemical detection of sweat glucose could be an alternative, given the challenges and limitations faced by enzymatic biosensors. The detection of glucose using non-enzymatic electrochemical sensors has been studied for a long time [14]. Electrocatalytic materials capable of oxidizing glucose quickly and effectively (including bulk metals, metal oxides, alloys, metal nanomaterials, and carbon nanocomposites) are used in these sensors [20]. One of the most critical drawbacks for wearable sensing is that electrocatalytic glucose oxidation must be carried out in an alkaline environment (pH > 11) for appropriate selectivity, sensitivity, and reproducibility. Conversely, a drastic reduction in, or even complete loss of, electrochemical activity is observed if the analysis is performed at physiological pH (pH 7.2-7.3). Glucose detection under non-alkaline physiological conditions shows poor reproducibility due to severe electrode-surface poisoning, as competitive adsorption of chloride (Cl⁻) and phosphate (PO₄³⁻) passivates the sensing interface, resulting in unsatisfactory sensing reliability. Non-enzymatic transition-metal sensors have a linear range inappropriate for the diagnosis of blood glucose (2-20 mM) or sweat glucose (sweat glucose level: 0.2-0.6 mM). Non-enzymatic sensors using metals or alloys, such as Pt and Pt-Pb alloys, are known to be expensive and toxic and to have poor selectivity [20]. Zhu and co-workers recently developed a non-enzymatic wearable sensor for the electrochemical analysis of sweat glucose under alkaline conditions, to address the restricted working pH environment (specifically, the alkaline pH requirement) of non-enzymatic wearable sweat glucose sensors [11]. A new strategy was used in their work, applying multi-potential steps to a non-enzymatic gold sensing electrode. A high negative potential step (−2.0 V) was first used to pre-treat the sweat sample, producing a localized alkaline condition at the electrode surface. Following this, a moderate sweat glucose-detection potential (0.2 V) was applied under the alkaline condition generated in the first step. In the final step, a positive potential (1.0 V) was applied to the electrode surface for cleaning and regeneration purposes. Due to its high electrocatalytic activity towards glucose oxidation, Au was used as the working electrode under alkaline conditions.
However, the pH change caused by the multi-potential step approach was confined to the diffusion layer near the wearable electrode-sensing interface, which is much smaller in volume than the bulk solution. Therefore, during the glucose-sensing process, it did not cause skin irritation [11]. Alternatively, for sweat glucose detection under neutral conditions, Toi et al. (2019) developed a new electrochemical patch design consisting of a wrinkled, stretchable nanohybrid fiber with a high overall surface area [21]. This can provide a high electrocatalytic effect, stretchability, and durability under mechanical deformation. Synergistic effects between the Au nanowrinkles and the rGO supporting matrix, with its oxygen-containing functional groups, improved the electrocatalytic behavior of the sensor by inducing abundant hydroxide anions on the Au nanowrinkled surface, facilitating dehydrogenation in the glucose oxidation reaction. This innovative technique can be used to detect glucose under neutral pH conditions and is therefore well suited for wearable non-enzymatic sweat glucose sensing. In addition, the stretchable, non-enzymatic glucose sensor has a stretching capability of up to 30% and remains stable after 10,000 stretch cycles at a strain of 30%. This provides high mechanical durability under repeated mechanical deformation cycles [21] (Figure 2c). For wearable non-enzymatic glucose sensing, to reduce interference from the extracted sweat sample matrix and mitigate sensor selectivity problems, the electrode can be covered with a Nafion layer followed by a Kel-F membrane layer. Nafion is a cation-exchange polymer membrane that can selectively exclude anions from the electrode surface [22]. A Kel-F membrane is a type of fluorocarbon material that can repel charged molecules (e.g., amino acids, acids, urea, ammonium) [23].
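As an illustration of the multi-potential step strategy discussed above, the sketch below encodes the three-step sequence (alkaline pretreatment, detection, cleaning/regeneration) as a simple program that a potentiostat driver could execute. Only the three potentials (−2.0 V, 0.2 V, 1.0 V) come from the cited work [11]; the step durations and the `apply_potential` interface are hypothetical placeholders.

```python
# Minimal sketch of a three-step potential program for non-enzymatic sweat glucose sensing
# on a gold electrode: local alkaline pretreatment, detection, then cleaning/regeneration.
# Step durations and the potentiostat interface are hypothetical; only the potentials
# follow the strategy described in the text.
from dataclasses import dataclass
import time

@dataclass
class PotentialStep:
    name: str
    potential_V: float
    duration_s: float

GLUCOSE_PROGRAM = [
    PotentialStep("alkaline pretreatment", -2.0, 2.0),   # generates local OH- at the Au surface
    PotentialStep("glucose detection", 0.2, 10.0),        # oxidation current would be sampled here
    PotentialStep("cleaning / regeneration", 1.0, 2.0),   # strips adsorbed species from the electrode
]

def apply_potential(volts: float) -> None:
    """Placeholder for a real, hardware-specific potentiostat call."""
    print(f"setting working electrode to {volts:+.1f} V")

def run_measurement_cycle(program=GLUCOSE_PROGRAM) -> None:
    for step in program:
        apply_potential(step.potential_V)
        time.sleep(step.duration_s)   # in practice, current is logged during the detection step
        print(f"finished step: {step.name}")

if __name__ == "__main__":
    run_measurement_cycle()
```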
On the other hand, transition-metal oxides, such as NiO or Co₃O₄, can replace Pt and Pt-Pb alloys and show high sensitivity. Mixed transition-metal tungstates have recently gained a great deal of attention owing to their remarkable properties, which arise from the different valence states of the W atom. Furthermore, tungsten-based materials have the advantages of being simple to synthesize, inexpensive, low in toxicity, and highly stable [20]. Ten Specific Requirements for the Development of a Desired Wearable Sweat-Based Glucose Biosensor for Successful Type II Diabetes Management: Challenges and Solutions Nonetheless, recent studies on sweat glucose analysis indicate that, while sweat can be collected non-invasively on the skin's surface for functional and real-life applications, many primary analytical challenges remain to be addressed before successful commercial operation. These include: (i) limited basic knowledge of sweat biology, with uncertainties in real-time sweat glucose concentration due to varying sweat flow rates; (ii) the association between sweat glucose and other biofluids such as blood and interstitial fluid; and (iii) biosensor reproducibility, individual biosensing consistency, and routine biosensing reliability across large population studies. To achieve a desirable, workable, and practical wearable glucose biosensor for effective management of type II diabetes, ten basic criteria must be met, spanning device manufacturing, proof-of-concept, marketing, and real-world application. The challenges, limitations, and advanced strategies used to effectively address the problems encountered in meeting each criterion are discussed. A Fully Integrated and Autonomous Platform A self-contained wearable sweat glucose biosensor must be a fully integrated and autonomous system [24]. For non-invasive, real-time, and continuous monitoring of sweat glucose levels, the device should consist of a flexible photovoltaic/biofuel cell for energy harvesting and conversion, flexible and secure rechargeable batteries (e.g., Zn-MnO₂) as intermediate energy storage devices, an iontophoretic system for autonomous on-demand sweat extraction (at least 100 nL/min/cm²) with a well-controlled sweat generation rate, a microfluidic system for sweat collection and temporary storage, an electrochemical/colorimetric biosensing platform with functional electrodes to convert concentrations of sweat glucose into electrical/optical signals, a flexible printed circuit board (PCB) to drive the iontophoretic process and to allow in situ data analysis as a controlling module (e.g., processing, calibration, and easy read-out signal transmission), and preferably a digital screen for direct and real-time tracking of glucose, removing the need for transmitting systems and external reading devices. In addition, the electronic display module should ideally provide a stable display without requiring continuous power, ultra-low power consumption, a wide viewing angle, high visibility, and good contrast. The sweat glucose level is indicated as "low", "mid", or "high", together with its corrected value, depending on the sweat glucose concentration detected, with pH and temperature changes taken into account during in situ measurement. This fully integrated system offers the operational convenience, safety, and miniaturization that are highly desirable for wearable and portable applications. A miniaturized sensor design normally requires only a minute amount of sweat for accurate sweat glucose analysis.
Without additional data storage and download capabilities, however, this topology prevents data from being archived or shared with health care providers and is, therefore, less attractive for integrated health-monitoring applications [25] (Figure 3a). To overcome this limitation, most of the currently reported wearable glucose biosensors rely primarily on wireless data transmission (e.g., near-field communication (NFC), Wi-Fi, Bluetooth, and Zigbee) and external display components (e.g., mobile phones, computers) for real-time data analysis and feedback in practical self-monitoring applications. Continuous monitoring of sweat glucose for 24 h or longer is important. This capability is certainly commercially relevant, with the only downside being the restricted miniaturization of the whole system. Although blood temperature and acidity remain homeostatic, these parameters in sweat can vary with generation conditions, and this can weaken the reliability of enzymatic detection. Optimized sweat glucose biosensors should therefore be coupled with pH and temperature sensors in the integrated sensing system/patch for more precise calibration [10]. Enzyme activity in the biosensing layers can change with pH. During muscle movements, metabolic secretion of lactic acid in sweat decreases the pH level to 4-5. The pH value varies between subjects and is affected by dietary routines [26]. Consequently, a change in the glucose biosensor signal can result from a change in sweat pH rather than from fluctuations in secreted glucose. This problem can be solved by parallel sensing of glucose and pH. Next, biosensor signals can also be influenced by ambient and skin temperature. An increase in temperature leads the glucose biosensor to overestimate the amount of sweat glucose, as higher temperatures increase enzymatic activity. Temperature and pH sensors integrated with glucose biosensors enhance the accuracy of glucose monitoring by providing real-time corrections of the measured sweat glucose levels based on pre-calibrated data. It is also important to test several sweat analytes at the same time as sweat glucose, as this can help determine whether changes in biosensor signals are due to real changes in analyte concentrations or to sweat rate effects. Potassium levels in sweat, for example, are largely invariant with respect to sweat rate and normal physiological changes in the body. Therefore, if there is a change in the sodium, lactate, or glucose signals while the potassium signal is constant, the other signal changes can be confidently attributed to a real physiological event [4]. In other words, it is essential to integrate sweat-rate sensors (K⁺, Na⁺) to unravel the rate-dependent modulation of sweat glucose secretion. By contrast, a single-analyte biosensor could generate misleading information, as a change in signal could be due to cessation of sweating, detachment of the biosensor from the skin surface, or even biosensor failure. Thus, multiple sensors can be used for accurate detection. High reproducibility with minor variation makes one-point calibration of biosensors before use possible. One-point calibration is required to account for baseline differences in biosensor signals, while a universal calibration curve ensures that only one preliminary measurement is required, minimizing the complexity of device setup and preparation before on-body use.
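As a minimal illustration of how such pH/temperature compensation, one-point baseline calibration against a universal calibration curve, and the "low"/"mid"/"high" read-out might fit together, consider the sketch below. All coefficients, thresholds, and the linear sensor model are hypothetical stand-ins for device-specific pre-calibrated data, not values from the cited studies.

```python
# Minimal sketch of signal correction and read-out classification for a wearable sweat
# glucose biosensor. All coefficients, thresholds, and the linear sensor model are
# hypothetical stand-ins for device-specific pre-calibrated data.

UNIVERSAL_SLOPE_uA_per_mM = 2.5      # universal calibration curve: current vs. glucose
TEMP_COEFF_PER_C = 0.03              # fractional signal change per degree C away from 25 C
PH_COEFF_PER_UNIT = 0.05             # fractional signal change per pH unit away from pH 7

def corrected_glucose_mM(raw_current_uA: float, baseline_uA: float,
                         temperature_C: float, pH: float) -> float:
    """One-point baseline subtraction, then pH/temperature compensation of the signal."""
    signal = raw_current_uA - baseline_uA                        # one-point calibration offset
    signal /= 1.0 + TEMP_COEFF_PER_C * (temperature_C - 25.0)    # undo temperature-enhanced enzyme activity
    signal /= 1.0 + PH_COEFF_PER_UNIT * (pH - 7.0)               # undo pH-dependent enzyme response
    return max(signal, 0.0) / UNIVERSAL_SLOPE_uA_per_mM

def display_category(glucose_mM: float) -> str:
    """Map the corrected sweat glucose level to the coarse on-patch read-out."""
    if glucose_mM < 0.1:
        return "low"
    if glucose_mM < 0.6:
        return "mid"
    return "high"

reading = corrected_glucose_mM(raw_current_uA=1.9, baseline_uA=0.4, temperature_C=33.0, pH=5.5)
print(f"{reading:.2f} mM -> {display_category(reading)}")
```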
Such wearable glucose biosensors should be fitted with a single calibration curve so that there is no need for extensive testing of the sensing characteristics of each unit individually before use [27]. This is standard practice for electrochemical biosensors in particular, because even commercial pH meters need pre-calibration in generic buffer solutions. A fully integrated and autonomous device should consist of a multilayer stack of at least three subsystems for wearable colorimetric sweat glucose biosensing: (i) an excellent skin-compatible adhesive layer (including for wet skin) with microchannel openings defining the sweat-collection and sweat-rate-monitoring areas; (ii) a sealed array of thin, elastic microfluidic channels and multiple reservoirs filled with color-responsive materials for different analyte concentrations; and (iii) near-field communication (NFC) electronics to communicate with existing wireless devices [28] (Figure 3b). The eccrine glands themselves provide the pressure to route sweat through a network of microfluidic channels. One-way control valves are used to communicate with the individual, separately located reservoirs containing color-responsive materials in a manner that avoids contamination and crosstalk. The main advantage of a fully integrated, wearable, colorimetric sweat glucose biosensor is that no power supply is required to run the whole system and analyze glucose. One of the limitations, however, is that it can only be used to analyze sweat glucose produced by physical exercise and not by iontophoresis or stretchable heaters. Therefore, it is not suitable for people who are sedentary. A Wearable Material That Is Soft, Flexible, and Stretchable When designing a wearable glucose biosensor for on-body monitoring, stretchability and structural stability (of the active biosensing electrode surface) are two significant competing issues requiring careful design and optimization. Although thin geometries and low-modulus elastomers are crucial for mechanical compatibility with the body, these characteristics often result in significant deformation or even fracture under external pressure, either during fabrication or under conditions associated with natural skin deformation. Thus, wearable sweat glucose biosensors should be fabricated with flexible, thin, and stretchable hybrid features so that they conform to the body surface and withstand physical strain [29] (Figure 3c). These unique features make them easy to wear and stable even when skin contact is physically disturbed. Mechanical friction and deformation of devices on soft human skin will delaminate the enzyme from the glucose biosensor. Straining of materials and devices can complicate the signal performance of the sensor and degrade the reliability of the biosensor patch. Therefore, during the wearer's normal activities, it is important to mitigate movement-induced artifacts to avoid signal interference. The stretchability of wearable biosensors can be achieved either by using intrinsically stretchable electrodes or by using extrinsically stretchable structures. Extrinsic structural stretchability can be achieved through the introduction of island-bridge, serpentine, wavy, or helix structures, while intrinsic stretchability can be achieved through the use of liquid metal nanomaterials, noble metal nanowires (e.g., Ag nanowires), gold nanomaterials (e.g., gold fibers), and carbon nanotubes (CNTs) that can converge into percolation networks [29].
In addition, the wearable biosensor should be able to detect multi-directional strain to emulate the strain environment and ensure mechanical stability during repeated stretching cycles [21]. Multidirectional strain biosensors are difficult to fabricate, as they usually show strongly coupled electrical conductivity changes along the principal strain axis and the perpendicular direction due to Poisson's ratio [30]. Under large strain conditions, this issue becomes more serious. Stretchable sweat glucose biosensors can be fabricated using conventional evaporation and photolithography techniques [29]. The disadvantage of these techniques is that they require a high vacuum, which is time-consuming and costly. The benefits of using a filtration approach, on the other hand, are its low cost, low processing temperature, simplicity, and ease of processing [29]. Specific deformation tests should be performed to determine the mechanical stability of the biosensor under the mechanical stresses anticipated during operation on the body. The wearable device should be bent, twisted, and stretched when applied to the arm to test its resistance to such stresses and to check for potential cracks or breaks in the electrode surface [4]. If these strains do not cause any apparent damage to the structure, this reflects the conformability and flexibility of the whole integrated biosensing system. The wearable biosensor should ideally remain mechanically stable after 1,000-10,000 repetitive stretching/releasing cycles at 50% strain [31]. Real-Time Sweat Stimulation and Extraction Sweat is an emerging, non-invasively accessible biofluid that can be readily obtained through physical exercise or deliberately generated on demand through thermal heating or chemical induction with a low electrical current (iontophoresis). Numerous recent reports have focused on on-body sweat analysis and its significant advantages [32]. Successful implementation of a reliable, non-invasive, and affordable monitoring system for the effective management of type II diabetes faces several primary challenges and limitations related to real-time sweat stimulation and continuous, long-term sweat glucose monitoring. These challenges include: (1) irregular or low sweat-generation rates during exercise; (2) inability to perform regular sweat-generating exercise due to health problems such as severe obesity or heart problems, or due to age (senior citizens), pregnancy, illness, or disability; (3) the limitation that iontophoresis is only promising for one-time measurements; and (4) an uncomfortable and unbreathable wearing experience induced by the stretchable wearable heater. Meeting these challenges requires substantial improvement in sweat induction, storage, and transportation.
Physical Exercise
For effective treatment of type II diabetes, glucose levels in diabetic patients are commonly controlled by periodic insulin injections before or during meals or by routinely prescribed drugs. Continuous, reliable, non-invasive monitoring of sweat glucose levels is therefore very important for assessing the appropriateness of the care provided and for minimizing the morbidity and mortality related to diabetes complications. One option for generating sweat is physical exercise. However, diabetic patients would require continuous sweat glucose monitoring about six times a day (fasting before breakfast, after breakfast, after lunch, after tea, after dinner, and before sleep), and showering six times a day because of the discomfort of excessive sweating is not practical from a personal, behavioral standpoint. Even a healthy, non-diabetic subject cannot endure six bouts of physical exercise per day to generate sufficient sweat for several continuous sweat glucose-monitoring sessions. Few diabetic patients would be willing to exercise every day to generate sweat for non-invasive sweat glucose monitoring and to maintain this routine for months or years in order to manage and control diabetes effectively. Intensive exercise to produce sweat can also cause hypoglycemia in diabetic patients, which is one of their major concerns. Although it is a non-invasive approach to glucose monitoring, sweat generation by physical exercise is therefore not fully practical. Diabetic patients with severe obesity, as well as pregnant women, may reach their maximum oxygen uptake earlier, resulting in a tendency to sweat earlier but at lower sweat rates than other diabetic patients. Moreover, as perspiration begins, glucose levels in sweat gradually decrease over time as glucose is rapidly flushed out of the sweat gland. The decrease in sweat glucose is attributed to the dilution effect caused by the increase in sweat rate that is observed as exercise continues.
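A minimal sketch of one possible first-order correction for the dilution effect just described is shown below. The underlying assumption, which is mine and not from the source, is that the glucose flux into sweat is roughly constant, so that the measured concentration scales inversely with sweat rate and can be rescaled to a common reference sweat rate.

```python
# Illustrative first-order sweat-rate normalization for the dilution effect.
# Assumption (not from the source): glucose flux into sweat is roughly constant,
# so measured concentration scales inversely with sweat rate.
REFERENCE_SWEAT_RATE = 1.0   # uL/min/cm^2, arbitrary normalization point

def rate_normalized_glucose(measured_mM, sweat_rate_uL_min_cm2,
                            reference_rate=REFERENCE_SWEAT_RATE):
    """Rescale a measured sweat glucose value to a common reference sweat rate."""
    if sweat_rate_uL_min_cm2 <= 0:
        raise ValueError("sweat rate must be positive")
    return measured_mM * (sweat_rate_uL_min_cm2 / reference_rate)

if __name__ == "__main__":
    # Same subject later in exercise: the higher sweat rate dilutes the raw reading,
    # but the rate-normalized values are comparable.
    print(rate_normalized_glucose(measured_mM=0.06, sweat_rate_uL_min_cm2=0.5))
    print(rate_normalized_glucose(measured_mM=0.03, sweat_rate_uL_min_cm2=1.0))
```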
As exercise proceeds, apart from the rise in sweat rate causing glucose dilution, the increase in skin temperature also affects the activity of the enzyme (GOx), which must be accounted for to avoid overestimating the actual glucose concentration. Furthermore, different body parts show different sweat rates, which, despite similar trends, results in different glucose concentrations at any particular time because of the dilution effect. To decipher sweat glucose more holistically, it would be helpful to compare glucose variations across body locations, environmental parameters, and responses to glucose intake. In short, real-time, continuous, and accurate sweat glucose monitoring based on exercise sweat is highly unlikely to be an ideal approach, particularly for patients who have diabetes in addition to other serious health problems such as obesity and cardiovascular disease. This approach is also unlikely to be ideal for pregnant women, the elderly, and the disabled.

Iontophoresis
Iontophoresis is an established technique that drives ions and polar molecules toward each electrode by applying a mild electrical current across the skin, and it is commonly used in clinics for diagnostic and therapeutic purposes, especially for sedentary people. In the iontophoresis process, a sweat-inducing agent such as pilocarpine (a stimulating agonist) is delivered iontophoretically into the skin (sweat glands) to generate sweat [3]. Hence, sweat can be produced non-invasively and on demand at any convenient location on the body. Embedded iontophoresis capabilities should therefore be included in a well-integrated system for local sweat extraction. Using a pair of ring-shaped electrodes (WR Medical Electronics Co., area: 4.3 cm²), with the sweat-rate sensor (Q-sweat, WR Medical Electronics Co.) installed on the positive electrode, the stimulated region is sealed to extract the stimulated sweat through an iontophoresis strategy. By modulating the duration of the applied iontophoresis as well as the agonist concentration (e.g., acetylcholine, methacholine, and pilocarpine), the effective sweat-secretion period can be fine-tuned from a few minutes to tens of minutes. In addition, by optimizing the sweat-stimulating drug concentration in the custom-developed hydrogels and carefully designing the iontophoresis electrodes, consistent secretion rates exceeding 100 nL/min/cm² can be achieved. Sufficient sweat can thus be extracted for reliable in situ glucose analysis without causing skin damage or discomfort in the subjects [12] (Figure 4a). With these merits, iontophoresis is critical in addressing the challenges of increasingly aging populations and of diabetic patients with health problems and limited access to medical services. The composition of exercise sweat is important for applications in athletics and physiology, while sweat generated through chemical iontophoresis is more useful for medical diagnosis. The analysis of such stimulated sweat secretion is significantly more attractive for non-invasive glucose-monitoring applications on account of the short sampling times and the ability to perform sedentary measurements, which is more convenient than physical exercise. Iontophoresis may also avoid the risks of exercise-induced hypoglycemia in diabetic patients, making it well suited to large-scale population monitoring.
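A worked estimate of the sweat-collection time implied by the numbers above (a secretion rate of at least 100 nL/min/cm² over the 4.3 cm² ring-electrode area) is sketched below; the 5 µL target volume is a hypothetical microfluidic dead volume, not a value from the source.

```python
# Worked estimate of sweat-collection time under iontophoretic stimulation,
# using the secretion rate (>= 100 nL/min/cm^2) and ring-electrode area (4.3 cm^2)
# quoted above. The 5 uL target volume is an assumed chamber volume.
SECRETION_RATE_NL_MIN_CM2 = 100.0
ELECTRODE_AREA_CM2 = 4.3
TARGET_VOLUME_UL = 5.0

def collection_time_min(volume_uL,
                        rate_nL_min_cm2=SECRETION_RATE_NL_MIN_CM2,
                        area_cm2=ELECTRODE_AREA_CM2):
    flow_uL_min = rate_nL_min_cm2 * area_cm2 / 1000.0   # nL/min -> uL/min
    return volume_uL / flow_uL_min

if __name__ == "__main__":
    # ~0.43 uL/min over 4.3 cm^2 -> roughly 12 minutes to gather 5 uL
    print(f"{collection_time_min(TARGET_VOLUME_UL):.1f} min")
```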
Since the body undergoes dynamic physiological changes during exercise, whereas local sweat stimulation through iontophoresis leaves the body largely in its resting state, the resulting sweat composition may differ and needs further investigation. Moreover, iontophoresis may be better suited to one-time use than to continuous monitoring, as repeated application of the iontophoretic current at the same position may be harmful to the underlying skin. Because a high iontophoresis current can cause skin irritation, all newly developed iontophoretic wearable systems should be designed to reduce the current density and the iontophoresis time. The pH of the epidermis will also change due to ionic accumulation at the sweat-sampling site after repeated iontophoretic sweat induction. Using buffered gel coatings (e.g., cryogel, agarose gel) may help prevent burns caused by adverse pH effects, protect the epidermis, and overcome this issue [33] (Figure 4b). Because the buffered gel inevitably adsorbs onto the surface of the epidermis after a while, however, this can raise another safety concern. In short, more research should be focused on the iontophoresis approach to continuous sweat glucose monitoring for the effective management of type II diabetes so that it can be used practically and usefully for repeated sweat generation.

Wearable Stretchable Heater for Non-Invasive Continuous Sweat Extraction
Based on its superior electrical conductivity at high aspect ratios, the Ag nanowire network can be used as a highly transparent and stretchable heater for wearable electronics applications [34]. The unique Ag nanowire network/PDMS structure reported by Ko and co-workers could generate Joule heating with a rapid thermal response, enhanced electrical stability to withstand repeated mechanical stress (200%), and a small variance in resistance [35] (Figure 4c). For applications requiring non-uniform or site-specific heating, the spatial temperature distribution could be controlled by manipulating the spatial current density through patterning of the nanowire percolation networks using direct laser ablation. The soft and thin properties of a wearable stretchable heater allow sweat to be generated on the curvilinear and irregular surfaces of the human body (e.g., chest, forehead, temple, forearm). Long-term sweat generation using wearable stretchable heaters is achieved by coating gold (Au), a non-toxic, oxidation-resistant material, onto the exposed layer of Ag nanowires [34]. This galvanic Au-coating process enhances the device's electrical stability under sweating conditions and thereby supports stable, long-term sweat generation for continuous sweat glucose monitoring. However, if the stretchable wearable heater is constructed as a soft, thin, stretchable solid-layer patch, it will be uncomfortable and unbreathable to wear. To overcome this problem, superhydrophobic materials are incorporated to drive sweat into the microfluidic system spontaneously and leave the contacted skin area dry and comfortable, even over multiple sweat-generation cycles. This may be a practical technique for generating sweat continuously for non-invasive glucose biosensing, but it requires a power supply, and no in situ sweat glucose monitoring using this approach has yet been reported.
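The Joule-heating behavior mentioned above can be roughed out with a lumped-parameter estimate; the drive voltage, heater resistance, patch area, and effective heat-transfer coefficient below are assumed illustrative values, not measurements of the reported Ag nanowire/PDMS device.

```python
# Rough lumped-parameter estimate of Joule heating for a stretchable nanowire
# heater: P = V^2/R, and steady-state temperature rise dT ~ P/(h*A) under an
# assumed effective heat-transfer coefficient. All numbers are illustrative.
DRIVE_VOLTAGE_V = 2.0        # assumed
HEATER_RESISTANCE_OHM = 30.0 # assumed
HEATED_AREA_M2 = 4e-4        # assumed 4 cm^2 patch
H_EFF_W_M2K = 20.0           # assumed effective skin/ambient heat-transfer coefficient

def joule_power_w(v=DRIVE_VOLTAGE_V, r=HEATER_RESISTANCE_OHM):
    return v * v / r

def steady_state_temp_rise_k(power_w, area_m2=HEATED_AREA_M2, h=H_EFF_W_M2K):
    return power_w / (h * area_m2)

if __name__ == "__main__":
    p = joule_power_w()
    print(f"power = {p:.2f} W, power density = {p / HEATED_AREA_M2:.0f} W/m^2")
    print(f"approximate temperature rise = {steady_state_temp_rise_k(p):.1f} K")
```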
The Collection and Detection of Sweat Components at Rest via Hydrogel
The majority of wearable sweat sensors have focused on exercise-driven, chemically (iontophoresis) driven, or thermally driven sweat production, but they are unable to swiftly draw the tiny volumes of resting sweat into the device, restricting real-time analysis. In the past, bulky instrumentation or 24 h collection periods for single-point analyte measurement were necessary for at-rest thermoregulatory sweat-rate monitoring in clinical settings [36]. There is still a need for convenient, wearable technologies for continuous resting-sweat monitoring. This is a major hurdle that must be addressed for sweat to become a viable means of health monitoring across all activities, whether active or sedentary, and across all user groups, whether young or elderly, healthy or sick. Recently, Nyein et al. (2021) developed a wearable hydrogel patch for continuous analysis of thermoregulatory sweat at rest for pH, Cl−, and levodopa monitoring [37]. The microfluidic device was designed to enable effective small-volume collection and analysis of resting sweat. The researchers demonstrated the functionality of the thin hydrogel patch for dynamic sweat analysis in the context of everyday activities, stressful events, hypoglycemia-induced sweating, and Parkinson's disease. In another study, Sempionatto et al. (2021) presented a new rapid and reliable approach that combines a simple hydrogel touch-based fingertip sweat electrochemical sensor with an algorithm that accounts for personal variation to estimate blood glucose concentrations accurately [38]. This new painless and simple glucose self-testing protocol leverages the fast sweat rate on the fingertip for rapid assays of natural perspiration without any sweat stimulation, together with a personalized translation of the sweat response into a blood concentration. In an earlier study, Nagamine et al. (2019) also reported a hydrogel-based touch-sensor pad for the non-invasive extraction and detection of sweat components [39]. The sensor was composed of electrochemical working and reference electrodes fully covered with an agarose hydrogel containing a sweat-extraction solution, Dulbecco's Phosphate-Buffered Saline (DPBS). Sweat components could be easily and continuously extracted at rest through skin contact with the agarose hydrogel, followed by in situ L-lactate detection. In another study that utilized the fingertip as a site for simultaneous biomarker sampling and user identification, Emaminejad and co-workers (2020) reported the use of a thin hydrogel micropatch that exploits sweat to noninvasively access biomarker information at rest [40]. The fingertip was determined to be the optimal site for natural perspiration sampling, which was subsequently used to demonstrate successful tracking of the dosage level and metabolic pattern of caffeine intake among different subjects.
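The idea of a personalized sweat-to-blood translation mentioned above can be sketched as a simple per-user linear calibration fitted from a few paired fingerstick references. This is an illustrative stand-in under my own assumptions, not the published algorithm of Sempionatto et al.; the paired values are hypothetical.

```python
# Minimal per-user calibration sketch: fit a personal linear mapping from
# fingertip sweat glucose to blood glucose using a few paired reference
# measurements, then apply it to new sweat readings.
import numpy as np

def fit_personal_model(sweat_mM, blood_mg_dl):
    """Least-squares fit blood = a*sweat + b for one individual."""
    a, b = np.polyfit(sweat_mM, blood_mg_dl, deg=1)
    return a, b

def estimate_blood_glucose(sweat_mM, a, b):
    return a * sweat_mM + b

if __name__ == "__main__":
    # Hypothetical paired measurements collected during enrolment of one user
    sweat = np.array([0.03, 0.05, 0.08, 0.12])
    blood = np.array([85.0, 110.0, 150.0, 205.0])
    a, b = fit_personal_model(sweat, blood)
    print(estimate_blood_glucose(0.07, a, b))
```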
All the studies reported above utilized a hydrogel to collect resting sweat. However, the main issue is that using a hydrogel alone as a sweat-extracting platform at rest can dilute the sweat composition and hence challenge the detection limit and sensitivity of electrochemical sensors. Instead of using only a thin hydrogel layer, the researchers elegantly occupied the remaining dead space with a rigid filler. Ultimately, these hydrogel patches offer continuous, autonomous monitoring of body physiology at rest by enabling sweat analysis suitable for sedentary, habitual, and daily activities.

Wearable Self-Powered System
Currently, most wearable batteries are rigid, substantially bulky, and require external charging or regular replacement. In addition, there are intrinsic safety concerns with commonly used batteries (e.g., lithium batteries) [41]. The electrochemical biosensing system should therefore be combined with a self-powered energy harvesting, conversion, and storage system for routine, continuous sweat glucose monitoring.

A Wearable Solar Cell and Biofuel Cell
Clean and renewable energy from the surrounding environment (e.g., solar energy, mechanical movement and friction, biofluids) could be used to create a self-powered system to support the effective operation of wearable glucose biosensors. Compared to flammable, organic electrolyte-based batteries, a self-powered solar energy system eliminates safety concerns at higher cell temperatures, particularly with relatively large outdoor heat generation. One study reported an integrated self-powered wearable glucose biosensor that could be charged up to 6.0 V in 1 h of outdoor sunlight (approximately air mass 1.5) [25]. Using the solar energy stored in its batteries, the unit achieved a cruising time of up to 8 h with a cut-off potential of 3 V. Nevertheless, for everyday use in continuous sweat glucose monitoring, a 1 h charge in outdoor sunlight may not be practical. Biofuel cells, on the other hand, could be an alternative solution to the problem. Most of the wearable biofuel cells produced so far are based on epidermal platforms that use sweat lactate as the biofuel [28]. This is rational, since lactate and glucose are excreted from the sweat gland at the same time, and their concentrations depend on the individual and on eating habits [42]. A stable supply of biofuel is essential for the efficient production of electricity. This problem can be overcome by combining a powerful sweat generator with a sweat-rate sensor and microfluidics to ensure a continuous, effective sweat flow. In addition, by designing biocatalysts and enzyme-electronic interfaces with lower resistance, the main stability problem can be mitigated. Finally, it is possible to improve power generation by integrating enzymatic cascade reactions and by connecting the biofuel cell systems themselves with biocomputing logic-gate systems for reversible on/off switching of the power output [43].

A Wireless Power Transmitter
In addition to fuel cells, WiTricity (wireless electricity) could provide a powerful solution to the energy-supply problem of wearable electrochemical biosensors [44]. WiTricity is an advanced technology that demonstrates that wireless energy can be delivered to a wearable biosensor over a moderate distance to monitor sweat glucose autonomously and in real time. The WiTricity concept is based on strongly coupled magnetic resonance, described by coupled-mode theory.
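The coupled-mode picture just mentioned can be illustrated with the standard two-coil figure of merit for a resonant inductive link; the coupling coefficient and quality factors below are assumed example values, not parameters of any reported WiTricity prototype.

```python
# Sketch of the standard two-coil resonant-link estimate used in coupled-mode
# analyses: with figure of merit U = k*sqrt(Q1*Q2), the maximum achievable
# efficiency is U^2 / (1 + sqrt(1 + U^2))^2. The k and Q values are assumed.
import math

def max_link_efficiency(k, q1, q2):
    u = k * math.sqrt(q1 * q2)
    return u * u / (1.0 + math.sqrt(1.0 + u * u)) ** 2

if __name__ == "__main__":
    # Example: loosely coupled coils (k = 0.05) with quality factors of ~200
    # give an efficiency of roughly 0.8, consistent with the ~80% figure cited below.
    print(f"{max_link_efficiency(0.05, 200.0, 200.0):.2f}")
```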
First, compact WiTricity resonators are designed, followed by the construction and evaluation of a working prototype of a WiTricity-driven wearable biosensor. An energy-transfer efficiency of around 80% over a distance of 15 cm would allow the integrated system to function properly. The wearable sweat glucose biosensor can be powered wirelessly by inductively coupled transmitter antennas. A modest misalignment between the transmitter and the receiver has little effect on power transfer, although it does reduce the transmission efficiency. Lateral and angular misalignments of the receiver antennas could be compensated using a multi-transmitter antenna configuration. With the use of in-phase transmitter antennas, the active range over which the wearable biosensor can function properly could be extended; wearable biosensors can then work continuously at a range of 4-10 cm. The wireless power-management system could reduce the weight of the wearable glucose biosensor, eliminate safety problems, and enable continuous monitoring of sweat glucose over extended periods.

A Small Flexible/Wearable Aqueous Rechargeable Battery
For a good self-sustaining system, attention should be paid not only to energy harvesting and conversion but also to power storage. The latest advances in research on stretchable aqueous batteries, especially aqueous Li-ion batteries and zinc-based batteries, are important for the realization of the wearable devices of the next decade, such as sensors, medical devices, and electronic skin [45]. An aqueous rechargeable zinc-manganese oxide (Zn-MnO2) battery, with features such as improved safety, light weight, eco-friendliness, high output voltage, and high capacity, has become one of the most promising alternatives to conventional lithium batteries and other commercially available batteries that use flammable organic electrolytes [25]. Zn-MnO2 batteries could serve as intermediate energy-storage systems, and the use of aqueous electrolytes could reduce battery safety issues, which are critical for wearable devices. Such remarkable energy-storage capacity and mechanical strength make the fabricated flexible battery suitable for versatile, portable applications.

Flexible, Microfluidic Sweat-Sampling System
A wearable sweat glucose biosensor that is not integrated with a microfluidic system faces four main challenges: (1) susceptibility to contamination by skin (bio)markers; (2) mixing and carryover between new and old sweat; (3) irreproducible sample transport over the surface of the detector; and (4) lack of control over sweat evaporation and volume. As a result, a small, flexible, skin-compatible microfluidic device with microfluidic channel networks, inlet/outlet ports, micro-reservoirs, and electrochemical/colorimetric biosensors is required for sweat collection, capture, storage, and analysis [46]. In the past, fabric (cotton threads) was used as a medium for manufacturing low-cost and low-volume microfluidic devices [8] (Figure 5a). Nowadays, advanced strategies such as laser ablation enable quick material patterning at small scales and are useful for engraving microfluidic channels in flexible plastic substrates [47]. The integration of microfluidics into wearable sweat glucose biosensors overcomes many challenges that would otherwise decrease data integrity. First, when sweat reaches the microfluidic channels, it is separated from the surface of the skin, stopping the skin from constantly leaking chemicals into the sweat.
To reduce the effects of mixing and carryover, efficient sweat sampling often requires cleaning the area of skin from which the sweat is collected, as residual glucose and exogenous contaminants can affect the measurement. Second, channels can be built so that sweat is continuously replenished by steering old sweat away from the biosensor and allowing new sweat to flow in. This guarantees readings as close to real time as possible, rather than rolling averages of sweat glucose concentration caused by sweat-dilution problems at varying sweat rates. Third, microfluidics enables consistent measurements at low sweat rates. Fourth, microfluidics can encase the sweat as it travels across the biosensing electrodes, minimizing evaporation, preventing environmental or external contamination, and avoiding direct contact between the sensing components and the skin [4]. In addition, the microfluidic channel connected to the electrochemical biosensors should be made moderately hydrophobic by short-term heat treatment before bonding to the electrochemical sensors; this prevents any calibration solution from flowing back into the microfluidic channel or contaminating it. Xiao et al. (2019) recently developed a wearable colorimetric sensor based on soft microfluidic chips that can be placed directly on the skin to detect sweat glucose [48] (Figure 5b). The device consisted of five microfluidic channels linked to detection microchambers. Multiple inlets and control valves are necessary for efficient sweat collection. The microchannels redirected the sweat excreted from the epidermis to the microchambers, and each microchamber was combined with a control valve to avoid backflow of chemical reagents. In short, microfluidics with multiple sweat-uptake channels and a safety check valve are the most important criteria for designing a functional system, especially for in situ colorimetric sweat glucose biosensing. Soft microfluidic devices with shorter sweat-sampling and filling times and advanced detection methods are needed to provide timely and rich analytical data. This must be theoretically modeled and experimentally implemented to achieve short sweat-sampling times, quick sweat flow rates, and effective transport over the detector surface [17]. An increased sweat-sampling rate could effectively address several existing challenges of accurate epidermal electrochemical glucose biosensing by continuously providing sufficient sweat to the detector surface for robust biosensing while rapidly removing the initial contaminating glucose residues on the skin surface or in the sweat pores. Nevertheless, a greater number of inlets would promote mechanical instability of the adhesive sheet because of the proximity of the openings, which could lead to sweat leakage. As a result, the inlet configuration should be carefully engineered to allow the maximum number of inlets for rapid sweat sampling without significantly increasing the required sweat volume (small volumes at the microliter level), resulting in a short detector-surface filling time. Conditions should be kept consistent between tests, since temperature and sweat-extraction conditions change the sweat rate and therefore the filling time of the detector surface.
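Two of the quantities discussed above, the detector-surface filling time and the carryover between old and new sweat, can be roughed out as shown below. The geometry, sweat rate, and inlet count are assumed values, and the carryover model is an idealized well-mixed approximation rather than a description of any specific device.

```python
# Illustrative microfluidic sampling estimates: (i) detector fill time from the
# channel dead volume and total sweat inflow, and (ii) an idealized well-mixed
# washout model for carryover between "old" and "new" sweat.
import math

def fill_time_min(dead_volume_uL, n_inlets, sweat_rate_uL_min_cm2, inlet_area_cm2):
    inflow_uL_min = n_inlets * sweat_rate_uL_min_cm2 * inlet_area_cm2
    return dead_volume_uL / inflow_uL_min

def washout_concentration(c_old, c_new, flow_uL_min, chamber_uL, t_min):
    """Well-mixed chamber: C(t) = C_new + (C_old - C_new) * exp(-Q*t/V)."""
    return c_new + (c_old - c_new) * math.exp(-flow_uL_min * t_min / chamber_uL)

if __name__ == "__main__":
    # Assumed: 2 uL dead volume, 4 inlets of 0.1 cm^2 each, 0.5 uL/min/cm^2 sweat rate
    print(fill_time_min(dead_volume_uL=2.0, n_inlets=4,
                        sweat_rate_uL_min_cm2=0.5, inlet_area_cm2=0.1))  # ~10 min
    # Carryover: how quickly the chamber concentration approaches that of new sweat
    print(washout_concentration(c_old=0.10, c_new=0.05,
                                flow_uL_min=0.2, chamber_uL=1.0, t_min=10.0))
```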
A Porous Interface to Enhance Sweat Glucose Biosensing Sensitivity
Porous electrodes (e.g., Al2O3, MXene) can enhance the loading of the GOx enzyme used for glucose biosensing and exhibit a higher sensitivity than planar electrodes, because porous electrodes usually have a larger electrochemically active surface available for the electrochemical reaction with reactants, particularly when only a small amount of sweat glucose is available [18]. In addition, compared to planar electrodes, porous electrodes can reduce potential drifts at the redox peaks caused by differences in electrolyte concentration and by charging effects [49] (Figure 5c). Robustly cross-linked enzymes on the porous metal structure enable stronger enzyme immobilization, prevent delamination of the enzyme membranes, and improve the biosensor's reliability under mechanical friction and deformation [50] (Figure 5d).

High-Throughput Roll-To-Roll (R2R) Device Fabrication Technique for Large Population Studies
One of the main obstacles to the large-scale population studies needed to interpret sweat is the high-throughput production of sweat glucose biosensors with uniform, reliable performance (high stability) and the high yields that are also necessary for commercial viability. Roll-to-roll (R2R) rotary screen printing is a promising method for producing high-performance and cost-effective flexible wearable electronics [51] (Figure 6a). These R2R-compatible methods are advantageous over traditional multi-stage lithography and etching processes, since components can be mass-produced at high speed on large substrates using automated systems with minimal human involvement [52] (Figure 6b). The R2R fabrication of integrated sweat-rate sensors and microfluidics is a crucial application-oriented technological advancement to help decode the sweat-rate-dependent modulation of sweat glucose. To date, iontophoretic sweat glucose dynamics and sweat-to-blood glucose correlations have not been adequately studied to establish the feasibility of sweat-based diabetes management. Considering iontophoretic sweat glucose and sweat rate alone, with simple models, it would be difficult to predict instantaneous blood glucose levels accurately. This does not preclude a more complex relationship between sweat components and blood glucose, including multiparameter and lag-time dependence [51]. Expanding basic knowledge of sweat gland physiology and carrying out large-scale correlation studies with multiplexed sensing to characterize sweat-rate dependencies could help unravel this relationship. In short, the design of robust and flexible R2R gravure-printed electrodes is a crucial translation step toward producing large-scale, low-cost wearable glucose biosensors for personalized health-monitoring applications.

Correlation with Blood Glucose
Sweat glucose concentrations lag behind blood glucose, and their concentration range differs from that of blood glucose because of diffusion barriers in human physiology. A better physiological understanding of the sweat glucose-secretion process could provide the basis for more sophisticated models that account for or normalize sweat rates, leading to a stronger relationship with blood glucose [53].
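One way to characterize the time lag just mentioned is to cross-correlate paired sweat- and blood-glucose time series; the sketch below uses synthetic placeholder signals rather than real measurements.

```python
# Sketch: estimate the sweat-to-blood glucose time lag by cross-correlating two
# equally sampled time series. The signals below are synthetic placeholders.
import numpy as np

def estimate_lag_minutes(blood, sweat, dt_min):
    blood = (blood - blood.mean()) / blood.std()
    sweat = (sweat - sweat.mean()) / sweat.std()
    xcorr = np.correlate(sweat, blood, mode="full")
    lags = np.arange(-len(blood) + 1, len(blood))
    return lags[np.argmax(xcorr)] * dt_min   # positive => sweat lags blood

if __name__ == "__main__":
    t = np.arange(0, 120, 5.0)                          # 2 h, 5-min sampling
    blood = 100 + 40 * np.exp(-((t - 40) / 10) ** 2)    # synthetic excursion
    sweat = 100 + 40 * np.exp(-((t - 55) / 10) ** 2)    # same shape, 15 min later
    print(estimate_lag_minutes(blood, sweat, dt_min=5.0))  # ~15
```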
While the lack of a uniform sweat-to-blood glucose correlation across all subjects makes it difficult to identify universal sweat thresholds for the diagnosis or management of type II diabetes, more thorough longitudinal sweat glucose studies should yield individual-specific sweat-to-blood relationships for personalized diabetes management, because the correlation differs for each individual. Standard techniques such as Clarke Error Grid Analysis, used for the quality evaluation of every new device for human blood or plasma glucose detection, should also be applied to human sweat glucose detection [54]. Big-data analytics should form the basis of precision and proactive medicine. For effective management of type II diabetes as a lifestyle disease, advanced big-data analytics and Artificial Intelligence (AI)-based diagnostic and predictive tools (e.g., Machine Learning (ML) or Deep Learning (DL)) would be useful for elucidating the correlation between sweat and blood glucose [55]. Nevertheless, the most important consideration regarding basal sweat and blood glucose changes is the use of sweat assessment to diagnose prediabetes or diabetes and to predict the onset of glucose events (e.g., hypoglycemia) so that early and effective treatment can begin.

Figure 5. (a) A wearable microfluidic cotton thread/paper-based device linked to a smartphone for sweat glucose sensing. (b) A wearable colorimetric sensor based on microfluidic chips that detect sweat glucose easily and quickly. (c) Porous enzymatic membrane for highly stable nanotextured glucose sweat sensors to ensure reliable, non-invasive health monitoring. (d) A nanoporous electrochemical gold capillary microfluidic integrated sensor that is fully stretchable for continuous glucose monitoring. The figures are reprinted with permission from refs. [8,49,50].
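As a companion to the Clarke Error Grid analysis mentioned above, the following is a deliberately simplified accuracy screen that checks only the commonly quoted zone-A criterion (estimate within 20% of the reference, or both values below 70 mg/dL); the full grid defines zones B-E with additional geometry, and the paired values here are synthetic.

```python
# Simplified screen in the spirit of Clarke Error Grid analysis: fraction of
# paired readings meeting the commonly quoted zone-A criterion. Data are synthetic.
import numpy as np

def zone_a_fraction(reference_mg_dl, estimated_mg_dl):
    ref = np.asarray(reference_mg_dl, dtype=float)
    est = np.asarray(estimated_mg_dl, dtype=float)
    within_20pct = np.abs(est - ref) <= 0.2 * ref
    both_low = (ref < 70) & (est < 70)
    return float(np.mean(within_20pct | both_low))

if __name__ == "__main__":
    ref = np.array([65, 90, 120, 180, 240])
    est = np.array([60, 99, 140, 170, 200])
    print(zone_a_fraction(ref, est))
```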
Figure 6. (a) Regional and correlative sweat analysis using microfluidic high-throughput sensing patches to decode sweat. (b) Roll-to-roll electrochemical sensors for wearable and medical devices. The figures are reprinted with permission from refs. [51,52].

Glucose-Triggered Insulin and Therapeutic Drug Delivery Closed-Loop System for Precision Theranostics
Maintaining a normal glucose level is critical to health management, as high and low glucose levels may indicate a risk of metabolic disorders such as diabetes or hypoglycemic shock, especially after intense exercise or an overdose of glycemic control drugs (e.g., metformin, aspirin, chlorpropamide, Tylenol). The current medical treatment of diabetes patients is focused on self-monitoring of the blood glucose concentration and subcutaneous self-administration of insulin to maintain normal glycemic levels. This common treatment procedure, however, is intrusive, stressful, and painful, and is thus prone to insufficient glycemic control, which can lead to additional health complications. Significant efforts have therefore been devoted to the design of "closed-loop" therapeutic systems capable of supplying insulin [56] or metformin [57] in response to increased levels of sweat glucose. A fully integrated device for high-fidelity sweat glucose measurement and feedback-controlled insulin or drug delivery would enable effective, pain-free management of the blood glucose concentration. Ideally, such a closed-loop therapeutic unit injects insulin/metformin at hyperglycemic peaks and glucagon at hypoglycemic troughs for effective blood glucose control. The closed-loop therapeutic system could be either a transdermal drug-delivery device (e.g., temperature-responsive phase-change nanoparticles embedded in microneedles) [57] (Figure 7a) or a system of pH-sensitive nanocarriers [56] (Figure 7b). Since drug delivery through the skin bypasses the digestive system, transdermal delivery of insulin and metformin requires a lower dose than oral delivery and thus prevents gastrointestinal side effects [58]. An integrated wearable tremor sensor detects simulated tremors of the kind induced under low blood glucose (hypoglycemic) conditions and triggers glucagon injection. In addition to temperature-responsive phase-change nanoparticles, particular attention has also been paid to glucose-responsive materials based on the biocatalytic reaction of GOx [59,60]. GOx catalyzes the conversion of glucose into gluconic acid with a decrease in local pH.
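Before turning to the pH-responsive release chemistry, a minimal sketch of the threshold-based feedback logic described above is given below; the thresholds and actions are placeholders for illustration only, not clinical recommendations.

```python
# Minimal sketch of closed-loop feedback logic: release insulin (or metformin)
# above a hyperglycemic threshold and glucagon below a hypoglycemic threshold.
# Thresholds are assumed placeholder values.
HYPER_THRESHOLD_MG_DL = 180.0   # assumed
HYPO_THRESHOLD_MG_DL = 70.0     # assumed

def control_action(estimated_blood_glucose_mg_dl):
    if estimated_blood_glucose_mg_dl >= HYPER_THRESHOLD_MG_DL:
        return "trigger insulin/metformin release"
    if estimated_blood_glucose_mg_dl <= HYPO_THRESHOLD_MG_DL:
        return "trigger glucagon release"
    return "no action"

if __name__ == "__main__":
    for g in (60.0, 110.0, 210.0):
        print(g, "->", control_action(g))
```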
For the development of stimulus-responsive therapeutic-agent-delivery systems, pH-sensitive nanocarriers based on polymeric nanogels, nanocapsules, and mesoporous silica could be combined with GOx [61]. Nevertheless, the development of wearable bio-responsive nanomachines that integrate continuous movement and glucose biosensing with activated insulin/drug release for precision theranostics remains an unmet medical challenge.

Security and Privacy Issues for Personalized Medicine in Wireless Wearable Biosensor Networks
Since most wearable devices transmit their data and signals wirelessly to an external mobile device or gadget, security and privacy are major concerns [62]. The sensitivity of these concerns is heightened by the direct involvement of humans. Whether the data collected from patients or individuals are obtained with or without the consent of the subjects, system requirements, misuse, or privacy concerns may restrict end-users from taking full advantage of integrated and autonomous wearable glucose biosensors. Furthermore, although the main intention is noble, some end-users may not consider such devices safe for everyday use.
In the worst case, serious social concerns may also arise from the fear that such devices could be used by government agencies or other private organizations to monitor and track individuals. Early detection of any such breach would reduce safety risks and help ensure the privacy, accuracy, and integrity of the health care data generated by wearable sweat glucose biosensors.

Conclusions and Future Perspectives
In the field of predictive diagnosis and routine monitoring of diabetes mellitus, it is crucial to develop wearable glucose biosensors with superior reproducibility and sensing stability for accurate, non-invasive glucose monitoring. Robust and stable biosensing components need to be developed to enable long-term use, because the long-term stability and uniformity of the biosensors are what make the system applicable to humans with minimal recalibration. The incorporation of microfluidics and multiplexed sensing to improve data integrity must be combined with a higher-level study of the physiological relevance of sweat glucose analysis using in situ, large-scale correlation studies. Theranostic interventions with autonomous closed-loop insulin or therapeutic-drug delivery systems could improve glycemic control. Parallel efforts are also needed to ensure secure data handling and to adopt high-throughput manufacturing methods so that wearable sweat glucose biosensors can be used more widely. The physiological importance of sweat glucose has yet to be investigated to assess its utility in realistic type II diabetes control and management applications. As for future prospects, apart from roll-to-roll printing, the use of 3D printing to produce wearable sweat glucose biosensors could help solve reproducibility and long-term stability problems. Three-dimensional printing enables precise and intricate designs to be produced at the small scales that are useful for ultra-low sweat volumes. Three-dimensional-printed glucose biosensors can gather glucose signals more effectively by incorporating pH, temperature, and sweat-rate sensors, outperforming conventional electrode methods, and the printed biosensors can be tailored more flexibly, as a form of personalized medicine, to meet a variety of end-user biological needs. By combining 3D-printed biosensors with electronic components on wearable devices, large-scale use would become possible. Manufacturers could use the same 3D printer nozzles used to print the glucose biosensors to print other components of the wearable devices (e.g., microfluidics, iontophoresis electrodes, stretchable metal nanowire heaters, PCBs) to reduce the risk of manufacturing defects in the assembly and manufacture of fully integrated devices. Wearable sweat glucose monitoring systems could also incorporate the Internet of Things (IoT) to improve the quality of services for diabetes care. The feasibility of an IoT-based approach for non-invasive and continuous glucose monitoring could be further explored. An IoT-based system architecture could be built from a wearable sweat glucose biosensor unit through to a back-end system that displays real-time sweat glucose, skin temperature, local pH, contextual information (i.e., sweat rate), and the blood glucose correlation in graphical and human-readable forms for end-users (patients and doctors). In addition, the RF communication protocol could be tailored to suit the sweat glucose monitoring system, allowing a high level of energy efficiency.
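To illustrate the kind of record such an IoT back end might receive from the wearable unit, the sketch below builds a JSON payload carrying the contextual signals mentioned above (skin temperature, local pH, sweat rate). The field names and transport are assumptions for illustration; the payload could, for example, be pushed over MQTT or BLE by the gateway.

```python
# Illustrative gateway payload for an IoT back end; field names are assumptions.
import json
import time

def build_reading_payload(device_id, sweat_glucose_mM, skin_temp_c,
                          local_ph, sweat_rate_uL_min_cm2):
    reading = {
        "device_id": device_id,
        "timestamp_s": int(time.time()),
        "sweat_glucose_mM": sweat_glucose_mM,
        "skin_temperature_C": skin_temp_c,
        "local_pH": local_ph,
        "sweat_rate_uL_min_cm2": sweat_rate_uL_min_cm2,
    }
    return json.dumps(reading)

if __name__ == "__main__":
    print(build_reading_payload("patch-001", 0.07, 34.2, 6.3, 0.6))
```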
The IoT-based system provides many advanced gateway-level services, such as a smart theranostic service that triggers insulin or therapeutic drug injections via the epidermis in abnormal situations (i.e., hypoglycemia or hyperglycemia). The ability to measure glucose and insulin simultaneously would enhance glucose regulation and the management of type II diabetes by providing an improved estimate of insulin tolerance, reducing medication variability, and optimizing protection against hypoglycemia. Insulin, however, is not present in sweat, but it can be measured in saliva [63] and tears [64]. Direct detection of free insulin by wearable affinity biosensors, either in saliva or in tears, will need to overcome the challenges posed by the size of the molecule, the low insulin concentration, and the required selectivity against interferents in these biological media. In short, efficient, realistic, and continuous monitoring of sweat glucose and saliva/tear insulin using a wearable, non-invasive point-of-care system offers significant potential for reducing the morbidity and mortality caused by diabetes mellitus and for helping to curb the diabetes epidemic.
Earnings Quality and Accruals over a Company's Life Cycle
Research background: The increasing number of bankruptcies and the growing risk of financial distress highlight the need for quality financial statements and conservative accounting, and they increase the need for quality tools to detect earnings management. However, the life cycle of a company affects financial performance and key aspects of earnings management, which have been examined in the international context only to a small extent. Purpose of the article: The paper examines the impact of the life cycle and a country-specific factor on the value of discretionary accruals in the tourism sector in the Visegrad countries, one of the sectors most vulnerable to the coming economic crisis. Methods: This study uses two-way analysis of variance with interaction, while also testing the assumptions of the model with normality tests, a homogeneity test, and post hoc tests (the Scheffé and Tukey methods). Findings & Value added: Earnings quality changes over the life cycle of a business; in the first stages (introduction, growth) and in decline, companies use income-increasing (upward) earnings management. On the contrary, mature and shake-out companies have enough positive earnings before taxes, which is a prerequisite for tax optimization of profit. The level of earnings management in tourism varies significantly at different stages of the life cycle, but also across countries. The results imply that the qualitative variable corporate life cycle, in interaction with the country, is an important explanatory variable whose inclusion can improve the explanatory power of earnings management models in Central European developing countries.

Introduction
As during the economic crises of 2009 and 2020, the global economy is facing an important challenge. Based on the OECD Economic Outlook of September 2020, the global economy is projected to decline by 4.5% in 2020 and the euro area by 7.9%. In the next period (2021), a modest economic recovery is expected, with growth of 5% worldwide and 5.1% within the euro area. However, while these predictions mean a better outlook for 2020, the estimate for 2021 is worse by 0.2% globally and by as much as 1.2% on a European scale compared with the March Outlook [1]. Euler Hermes [2] adds that, despite economic growth, the main insolvency boom will still occur at the end of 2020 and in the first half of 2021. The global insolvency index should reach record growth of 35% in 2021 compared with the latest data from 2019. The most dramatic growth is expected in the US (57%), Brazil (45%), and Spain (41%). A high percentage of bankruptcies is expected by Euler Hermes even in the small, pro-export-oriented countries of Central and Eastern Europe: a record 49% in Lithuania, 38% in Slovakia, and 34% in the Czech Republic. Poland and Hungary should be among the less affected countries, with 24% (Poland) and 20% (Hungary) of insolvencies. The last-mentioned countries, with the exception of Hungary, show much higher increases in insolvencies compared with 2019 and also compared with the crisis year of 2009. The increasing risk of default draws attention to publicly available financial statements (balance sheet, profit and loss statement, cash flow statement), which are the main source of information for stakeholders. The relevance and reliability of these statements may be reduced by the manipulation of accounting numbers, which companies resort to more extensively in adverse economic conditions.
Inaccurate financial statements tend to misrepresent a company's financial performance and may ultimately result in a cyclical increase in insolvency growth [3]. In contrast, high-quality accounting reporting and high-quality earnings reflect a long-term trend, a conservative understanding of accounting rules, the minimization of one-off items, and cash flow that covers profit [4]. Lo [5] directly associates high earnings management with low profit quality. In contrast, Dechow et al. [6] argue that the definition of quality earnings is ambiguous and that there are many candidates for a proxy of earnings quality. The frequently used residuals from accrual models (such as the Jones model, the modified Jones model, the Teoh et al. model, etc.) have a low R² and a large number of omitted variables, which reduce the reliability of the results. Stakeholders can detect accrual earnings management through two calculation principles: a time-series or a cross-sectional approach. The former is only suitable if the surveyed company has a sufficient history (sufficient accounting information). In contrast, the cross-sectional approach is more strongly represented in the professional literature, as evidenced by the review in [7]. The point of contention in the application of the cross-sectional approach is the requirement of homogeneous estimation samples. Undertakings operating in the same sector, which are expected to have similar financial characteristics, are commonly used as samples. However, the inhomogeneity of the normal accrual-generating process within an industry has been demonstrated by several studies [8][9][10]. Businesses in one sector share similar characteristics given by macroeconomic variables and the phase of the industrial life cycle, but industry-based samples do not take into account the uniqueness of companies in terms of their business life cycle. Empirical studies [11][12][13] show that profit manipulation differs at different stages of the company's life cycle and that the estimation of discretionary accruals based on life-cycle-based estimation samples instead of industry-based samples is more accurate. In this context, the paper examines the impact of the life cycle and a country-specific factor on the value of discretionary accruals in the tourism sector in the Visegrad countries, one of the sectors most vulnerable to the coming economic crisis. A net sample of the financial indicators of 3,650 companies from the last available accounting period (2018) was used, which is the most relevant to the economic situation in 2020. The study adopted the life cycle approach of Dickinson [14], and discretionary accruals were quantified with a modified Jones model. In order to identify the impact of the life cycle phase on the quality of earnings (the degree of earnings management), a two-way ANOVA was used, where the second factor was country affiliation.

Life cycle theory and corporate financial performance and accruals
The life cycle of a company is similar to the life cycle of products, with the difference that a company is a mix of different products and other aspects, so defining the life stages of a company on the same scale as a product would be insufficient. Business life cycle theories vary depending on their focus; we distinguish theories focused on the managerial concept and organization of the company from theories focused on financial indicators such as dividend policy, the management accounting system, takeover activity, or business valuation.
Damodaran [15] created a comprehensive business life cycle model consisting of six phases: Start-up, Young Growth, High Growth, Mature Growth, Mature Stable, and Decline. Similar to [14], this model describes the behaviour of investment, financial, and operational cash flow, in addition to focusing on the description of capital structure, debt capacity and debt value, tax benefits, and dividend policy. From the point of view of capital structure, Damodaran [15] states that external financing (bank loans, venture capital, or trade credits) is used more by younger companies because resources such as shares or marketable debt are unavailable to them. Internal resources are low or negative in the early stages, but may also exceed financial needs after the maturity stage. Debt gradually gains in value with the growth of sales and profit, which is reflected in the growth of tax benefits. The stabilization of profit and its predictability in the graduation phase help to reduce the expected costs of bankruptcy. Dividend policy follows the development of earnings and free cash flow to equity; dividends of any type (regular or special) are paid only from the mature growth stage onward. Damodaran [15] mentions, among other things, that the general model of a company's life cycle differs depending on the nature of the company examined. Businesses in the technical sectors have a significantly steeper life cycle curve, with a short introduction phase, rapid growth, and an equally rapid fall from peak profitability. A compressed life cycle means that rapid growth allows the business to become large in less time. However, its investment opportunities peak much faster and then begin to decline. The intensive life cycle causes the investment cash flow to become free cash flow to equity, which is paid out to investors as special dividends or used for buybacks. These companies are much more vulnerable to new competitors in a market with minimal barriers to entry. On the contrary, non-technical enterprises (service enterprises) have a longer life cycle, conditioned by stronger customer attachment to the product. The decline phase, like growth, is long-lasting; these businesses have a higher chance of a life cycle turnaround. Dickinson [14] developed a life cycle model based on cash flow. Unlike Damodaran [15] or Jovanovic [16] and others, she primarily examined cash flow, which should capture all the influences mentioned above. The company's life cycle includes five phases: Introduction, Growth, Maturity, Shake-out, and Decline. Cash flow is divided into operating, investing, and financing components, and a unique combination of these three components defines each phase of the life cycle. In the first phase (Introduction), the company is largely financed through bank debt or, to a lesser extent, by shareholders [17]. Higher external financing, negative working capital, and significant investment activity result in negative operating cash flow, negative investing cash flow, and a positive financing inflow. The company faces high corporate risk (credit and operational). Negative working capital is associated with the growth of accruals from current assets (mainly inventories), which causes a significant change in cash sales in the first phase of the life cycle [18].
On the contrary, the value of depreciation has little effect on the total accrual, because so far the company is only accumulating fixed assets, whose depreciation is low. Growing companies, like start-ups, have significant investment activity (negative investing cash flow), and debt financing deepens. Operating cash flow is positive because the growth of inventories is accompanied by the growth of trade credits; both of these components of the total accrual are growing. Total cash flow reaches a positive value. At this stage, the break-even point is reached: the company becomes profitable and is able to obtain long-term debt in addition to short-term debt. Dickinson [14] adds that both fixed assets and depreciation are at their highest, which increases the value of total accruals. In accordance with the industrial life cycle, the number of companies in the market peaks at the maturity stage; these companies have sufficient knowledge of their operational activities and are able to minimize operating costs [16]. Operating cash flow is positive, in contrast to investing and financing cash flows, which are negative. There is a significant difference between profit and cash flow; cash flow outpaces profit. Growing sales eliminate corporate risk. Mature companies pay large dividends and are not interested in new investments, which is in line with the findings of Damodaran [15] mentioned above. Although companies have easy access to debt at this stage, they also have to make debt repayments and pay interest on debt incurred in previous periods. Working capital accruals are low because investment in current assets is not significant [18]. This is also reflected in the year-on-year change in cash sales, which is much smaller than in other phases of the life cycle. The negative investing cash flow is also reflected in the lower value of fixed assets and the gradual decline in the value of depreciation. The shake-out stage is characterized by a gradual decline in profit and a stabilization or decline in the number of companies in the market. Growth rates are also declining, causing stock prices to fall [13]. However, [14] points out that there is no clear evidence in the literature about the cash flow pattern of such companies. For this reason, any cash flow pattern other than those defined for the other stages of the life cycle falls within the shake-out stage. Businesses at this stage can, for example, extend their life cycle by investing in new products or, alternatively, shrink the business (the company's strategy determines whether the investing cash flow will be an outflow or an inflow). In the last stage, all of the company's financial characteristics (sales, profit, cash flow) decline; operating cash flow is negative. A negative outlook is not good for shareholders, and stock prices gradually fall. Debt is reduced as a result of debt repayment, or the debt may be renegotiated (depending on the strategy, a financing outflow or inflow arises). Investing cash flow is positive because the company sells assets to satisfy debtholders [16]. Investment in working capital is minimal, but the company tries to raise cash by reducing receivables; cash sales are the highest of all life cycle stages. The sale of assets also reduces the value of fixed assets, and depreciation has only a small effect on profit or cash flow [18]. The description of the two life cycle models shows that there are significant changes not only in the company's financial performance but also in the room for profit manipulation (accrual and real earnings management).
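Based on the cash flow sign patterns described above, the Dickinson-style stage assignment can be sketched as a simple sign-based classifier, with shake-out as the residual category that absorbs every remaining pattern. This is a minimal sketch following the patterns as described in the text; it is not a transcription of the paper's Table 1.

```python
# Sketch of a Dickinson-style life-cycle classification from the signs of
# operating (ocf), investing (icf), and financing (fcf) cash flows, with
# shake-out as the residual category.
def life_cycle_stage(ocf, icf, fcf):
    def pos(x):
        return x > 0
    if not pos(ocf) and not pos(icf) and pos(fcf):
        return "introduction"
    if pos(ocf) and not pos(icf) and pos(fcf):
        return "growth"
    if pos(ocf) and not pos(icf) and not pos(fcf):
        return "maturity"
    if not pos(ocf) and pos(icf):          # financing flow may be in- or outflow
        return "decline"
    return "shake-out"                     # any remaining cash flow pattern

if __name__ == "__main__":
    print(life_cycle_stage(ocf=-50, icf=-120, fcf=200))   # introduction
    print(life_cycle_stage(ocf=80, icf=-60, fcf=-30))     # maturity
```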
Moreover, earnings with minimal accounting manipulation are more informative for stakeholders and more predictable [19,20]. Chen [12] notes that accrual models have higher explanatory power for listed Chinese companies when the life cycle is used as one of the variables. Zamrudah and Salman [21] found that accruals play an important role in determining earnings management in the growth phase, while they are not essential in the decline phase. Hastuti et al. [11] used a sample of Indonesian listed companies; however, according to their findings, the life cycle has no effect on the detection of earnings management. Dickinson et al. [22] examined the relevance of profit in relation to market value prediction; extreme performance increases the need for quality reported accounting information, especially in the Introduction and Decline phases. Previous studies have thus shown that earnings management and earnings quality vary over the life cycle; however, the evidence focuses on Asian countries and listed companies. It is therefore necessary to examine whether the life cycle has an impact on earnings management in unlisted companies and in Central European countries.

Methods
The aim of this paper is to investigate whether the life cycle affects the level of earnings management in tourism companies in the countries of Central Europe (the Visegrad Four). The review of the literature in the previous chapter showed that the financial characteristics of companies at different stages of the life cycle differ, not excluding factors indicating profit manipulation. In this study, we analysed two factors: first, the life cycle stages according to Dickinson [14], and second, the Country factor. Businesses are divided by country because, based on differences in national tax policies and accounting rules (regional IFRS/GAAP), we assume different levels of earnings management. The life cycle factor of a company is created on the basis of eight combinations of cash flow, which are divided into five categories corresponding to the stages of the life cycle (introduction, growth, mature, shake-out, decline). The categories and their cash flow patterns are described in Table 1. The quality of earnings can be examined in various ways, as described in [6]. However, earnings management models are the most commonly used proxies for determining whether earnings are manipulated or not. A modified Jones model was chosen, which is one of the most frequently used. Its advantage is that, compared with the original Jones model, it reduces type I and type II errors. The proxies for earnings management are the residuals of the estimated modified Jones model, which represent discretionary accruals. Both factors (qualitative variables) and the proxies for earnings management are examined by two-way ANOVA. The advantage of this method, as opposed to one-way analysis of variance, is the ability to investigate the interaction between the two factors, which reduces the error term. This method is subject to several assumptions (normality of subsets, homogeneity of variances, independence of cases, and no outliers). These assumptions are tested by appropriate tests (the Kolmogorov-Smirnov test, Levene's test), and outliers were removed by winsorizing at the 1st and 99th percentiles. If the assumptions are not met, then the two-way ANOVA is estimated on a robust basis (with robust standard errors). The regression form of the ANOVA provides the model described in Eq. 1. A sample for the study was obtained from the Amadeus database.
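The discretionary-accrual proxy described above can be estimated along the lines sketched below: total accruals scaled by lagged assets are regressed on 1/lagged assets, the change in revenue less the change in receivables, and gross PPE (all scaled by lagged assets), and the residuals serve as the proxy. The column names are assumptions, and an intercept is included here, which some specifications omit.

```python
# Sketch of cross-sectional modified-Jones estimation; residuals proxy for
# discretionary accruals. Column names (ta, lag_assets, d_rev, d_rec, ppe) are assumed.
import numpy as np
import pandas as pd

def discretionary_accruals(df):
    y = (df["ta"] / df["lag_assets"]).to_numpy()
    X = np.column_stack([
        np.ones(len(df)),                          # intercept (assumed specification)
        1.0 / df["lag_assets"],
        (df["d_rev"] - df["d_rec"]) / df["lag_assets"],
        df["ppe"] / df["lag_assets"],
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return pd.Series(y - X @ beta, index=df.index, name="disc_accruals")

# Usage: estimate separately within each estimation sample, e.g.
# df.groupby(["country", "life_cycle"]).apply(discretionary_accruals)
```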
There are three selection conditions: classification in NACE main section I (Accommodation and food service activities), turnover higher than 100,000 euros in 2018, and a registered office in one of the Visegrad Four countries.

Results and Discussion

The first step in the study of the quality of accruals at various stages of the company's life cycle is the analysis of the sample. The gross sample contained 15,295 enterprises; however, enterprises with missing values of financial indicators were detected and removed from the sample. After this first step, the sample contained 3,650 cases. Outliers were treated by winsorizing at the 1st and 99th percentiles; the advantage of this method is that the sample is not reduced. From the financial data of the 3,650 companies in the net sample, discretionary accruals were estimated and life cycle phases were assigned according to the cash flow patterns.

The estimated earnings management proxies were examined using descriptive statistics, as shown in Table 2. The mean values of earnings management show that there are differences in earnings quality across countries as well as across life cycle phases. On average, Czech and Slovak tourism enterprises increase their reported profit through accruals much more than enterprises in Poland and Hungary; the latter two countries manage profits to a very small extent on average. However, the standard deviations show that there is considerable variation in discretionary accruals and that many companies tend to reduce their profits.

More broadly, the life cycle analysis shows differences between businesses regardless of country of origin. The most manipulative companies are in the Introduction phase, where they show negative operating cash flow. At this stage, accounting profit is significantly overstated; even with regard to the standard deviation, companies use either neutral earnings management or significantly overstate profit relative to cash flow. In the Growth and Decline phases, although operating cash flows differ, earnings management has a similar positive value with a similar variance. The Mature and Shake-out stages, by contrast, show that businesses have enough resources to report lower accounting profits. After reaching the break-even point in the Growth stage, profit increases and corporate risk stabilizes. Stable profit allows companies to obtain additional resources for expansion at a low cost of capital, which increases the importance of tax optimization (minimization) of profit. The analysis thus shows that companies at different stages of the life cycle change their behaviour regarding profit management, and that the importance of tax-driven profit reduction is directly linked to the cash flow achieved.

Nevertheless, descriptive statistics do not take into account the interaction between the country and the life cycle, which the ANOVA examines in more depth. The first step in the analysis of variance is to verify the model's assumptions. The normality of the subsamples was examined with histograms and tested by the Kolmogorov-Smirnov test at the 0.05 level. As indicated by the significant differences between the medians and the means, this assumption is not fulfilled and the null hypothesis of normality of the subsamples was rejected. Second, the assumption of homogeneity of variances was tested by Levene's test.
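To make the sample preparation and testing steps concrete, the following Python sketch winsorizes the discretionary accrual proxy at the 1st and 99th percentiles, runs the Kolmogorov-Smirnov and Levene tests on the country-by-stage subsamples, and then estimates the two-way ANOVA in regression form with heteroscedasticity-robust (HC3) F-tests and partial eta squared, as discussed in the next paragraphs. The data frame, the column names and the random numbers are hypothetical placeholders, not the paper's actual dataset or code.

import numpy as np
import pandas as pd
from scipy import stats
from scipy.stats.mstats import winsorize
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical placeholder data: one row per company with a discretionary
# accrual proxy 'DA', a country label and a life cycle stage label.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "DA": rng.normal(0.0, 0.1, 400),
    "country": rng.choice(["CZ", "SK", "PL", "HU"], 400),
    "stage": rng.choice(["introduction", "growth", "mature", "shake-out", "decline"], 400),
})

# Winsorize at the 1st and 99th percentiles: extreme values are capped
# rather than dropped, so the sample size is not reduced.
df["DA_w"] = np.asarray(winsorize(df["DA"].to_numpy(), limits=[0.01, 0.01]))

# Normality of each country x stage subsample: Kolmogorov-Smirnov test
# against a normal distribution with the subsample's own mean and sd.
for (country, stage), sub in df.groupby(["country", "stage"]):
    da = sub["DA_w"]
    _, ks_p = stats.kstest(da, "norm", args=(da.mean(), da.std(ddof=1)))
    print(f"{country} {stage}: KS p = {ks_p:.3f}")

# Homogeneity of variances across all subsamples: Levene's test.
groups = [sub["DA_w"].to_numpy() for _, sub in df.groupby(["country", "stage"])]
_, lev_p = stats.levene(*groups)
print(f"Levene p = {lev_p:.3f}")

# Two-way ANOVA in regression form (country, life cycle stage and their
# interaction); HC3 robust F-tests are requested because the normality and
# homogeneity assumptions are typically rejected for accrual data.
model = smf.ols("DA_w ~ C(country) * C(stage)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2, robust="hc3")

# Partial eta squared per effect: SS_effect / (SS_effect + SS_residual);
# the value computed for the Residual row itself is not meaningful.
ss_res = anova_table.loc["Residual", "sum_sq"]
anova_table["partial_eta_sq"] = anova_table["sum_sq"] / (anova_table["sum_sq"] + ss_res)
print(anova_table)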
The null hypothesis of homogeneity of variances was also rejected: the variances are heterogeneous. For this reason, a two-way ANOVA with robust standard errors was used, which reduces the occurrence of type I errors. A summary ANOVA table for the selected factors is shown in Table 3. The results of the model indicate that both main effects (country and life cycle) are significant; the quality of reported profit varies across the Central European countries, and it also varies depending on the reported cash flow pattern. In addition, the two factors interact, i.e., the level of earnings management at different stages of the life cycle differs across the countries analysed. Partial eta squared expresses the explanatory power of each factor: the interaction between the factors explains only 0.09% of the variability of discretionary accruals, compared with the life cycle proxy, which explains up to 4.2% of the variability. The weak interaction effect is shown in Fig. 1.

Fig. 1 shows that there is an interaction between the analysed factors, but this interaction is weak, because the lines in the graph are almost parallel. Distinguishing the life cycle stages allows a deeper analysis of this factor with respect to country. Czech and Slovak tourism companies show similar values of earnings management in all phases of the life cycle. Similarly, Polish and Hungarian companies have similar characteristics, except for the last stages of the life cycle, in which Hungarian companies manipulate accounting profit to a greater extent. The significant differences between individual countries or phases of the life cycle visible in the graph are in line with the results in Table 2.

The last step of the analysis is the verification of differences in earnings quality according to the selected factors. As the country factor had weak explanatory power, the life cycle factor was examined first as the main effect. In accordance with Fig. 2, it is confirmed that there are significant differences in earnings quality between the life cycle stages, except for the mature and shake-out stages, where companies show an accounting decrease in profit that is not significantly affected by changes in the cash flow pattern or market conditions. A deeper analysis of the quality of reported profit by country and life cycle, however, reveals further differences between enterprises, as described in Table 4.

Table 4. Summary of significant differences (post hoc tests) in earnings quality across the Central European countries.

However, it should be noted that these characteristics vary from country to country. With this in mind, the life cycle of a company can be an important variable for stakeholders when estimating earnings quality and when creating a model of earnings management for companies in Central European countries.

Conclusion

The current economic situation in Europe and in the world shows that a high percentage of companies are directly or indirectly at risk of bankruptcy; businesses are subject to a high risk of financial distress. The threat of bankruptcy highlights the need for high-quality financial statements and the minimization of discretionary accruals. The estimation of earnings management is conditioned by many factors, among which financial performance and the associated stage of the company's life cycle play a significant role.
This study examined the impact of the life cycle stage on earnings quality and sheds new light on the problem of estimating earnings management in the Visegrad Four countries. Using a sample of 3,650 tourism enterprises, we examined the impact of the life cycle stage and of the country (accounting, tax and other macroeconomic aspects of the country) on the value of discretionary accruals according to the modified Jones model. The results of the two-way ANOVA model indicate that both of these influences have a significant effect on the level of profit manipulation; earnings management differs significantly at different stages of the life cycle. Mature and shake-out companies use more pronounced downward earnings management techniques, whereas in the other phases managers tend to increase profits. The country factor has a lower impact on differences in earnings quality; Slovak and Czech companies increase accounting profit to a similar extent. By contrast, Hungarian companies differ significantly from the others analysed, which may be due to different accounting rules or to the significantly smaller size of the subsamples.

In general, it can be stated that the phase of the company's life cycle is one of the main qualitative influences on the level of earnings management, but it should be assessed together with the country-specific conditions of the company. The inclusion of these two factors could improve the explanatory power of current accrual earnings management models, or it can be a decisive factor in creating an earnings management model for unlisted companies in Central and Eastern Europe. Since the focus of the study was on accrual-based earnings management, real earnings management was not examined from the life cycle point of view. Given that real earnings management (REM) affects cash flow in all its forms, it can also affect the classification of a company into a life cycle stage according to Dickinson [15]. Nevertheless, future work clarifying the relationship between the life cycle and both accrual and real earnings management will need to be performed.